BS::thread_pool: a fast, lightweight, and easy-to-use C++17 thread pool library
By Barak Shoshany
Email: [email protected]
Website: https://baraksh.com/
GitHub: https://github.com/bshoshany
This is the complete documentation for v4.1.0 of the library, released on 2024-03-22.
Multithreading is essential for modern high-performance computing. Since C++11, the C++ standard library has included built-in low-level multithreading support using constructs such as std::thread. However, std::thread creates a new thread each time it is called, which can incur significant performance overhead. Furthermore, it is possible to create more threads than the hardware can handle simultaneously, potentially resulting in a substantial slowdown.
The library presented here contains a C++ thread pool class, BS::thread_pool, which avoids these issues by creating a fixed pool of threads once and for all, and then continuously reusing the same threads to perform different tasks throughout the lifetime of the program. By default, the number of threads in the pool is equal to the maximum number of threads that the hardware can run in parallel.
The user submits tasks to be executed into a queue. Whenever a thread becomes available, it retrieves the next task from the queue and executes it. The pool automatically produces an std::future for each task, which allows the user to wait for the task to finish executing and/or obtain its eventual return value, if applicable. Threads and tasks are autonomously managed by the pool in the background, requiring no input from the user aside from submitting the desired tasks.
The design of this library is guided by four important principles. First, compactness: the entire library consists of just one self-contained header file, with no other components or dependencies, aside from a small self-contained header file of optional utilities. Second, portability: the library only utilizes the C++17 standard library, without relying on any compiler extensions or third-party libraries, and is therefore compatible with any modern standards-conforming C++17 compiler on any platform. Third, ease of use: the library is extensively documented, and programmers of any level should be able to use it right out of the box.
The fourth and final guiding principle is performance: each and every line of code in this library was carefully designed with maximum performance in mind, and performance was tested and verified on a variety of compilers and platforms. Indeed, the library was originally designed for the author's own computationally intensive scientific computing projects, running both on high-end desktop/laptop computers and on high-performance computing nodes.
Other, more advanced multithreading libraries may offer more features and/or higher performance. However, they typically consist of a vast codebase with multiple components and dependencies, and involve complex APIs that require a significant time investment to learn. This library is not intended to replace those more advanced libraries; instead, it was designed for users who do not require very advanced features and prefer a simple, lightweight library that is easy to learn and use and can readily be incorporated into existing or new projects.
#include "BS_thread_pool.hpp" und Sie sind alle festgelegt!submit_task() übermittelte Aufgabe generiert automatisch eine std::future , mit der die Aufgabe warten kann, um die Ausführung und/oder ihren eventuellen Rückgaberwert zu beenden.submit_loop() parallelisiert werden, die ein BS::multi_future zurückgibt, mit dem die Ausführung aller parallelen Aufgaben gleichzeitig verfolgt werden kann.detach_task() eingereicht werden, und Schleifen können mit detach_loop() parallelisiert werden, was die Bequemlichkeit für noch größere Leistung opfert. In diesem Fall können wait() , wait_for() und wait_until() verwendet werden, um auf alle Aufgaben in der Warteschlange zu warten.BS_thread_pool_test.cpp kann verwendet werden, um umfassende automatisierte Tests und Benchmarks durchzuführen, und dient auch als umfassendes Beispiel für die ordnungsgemäße Verwendung der Bibliothek. Das mitgelieferte PowerShell -Skript BS_thread_pool_test.ps1 bietet eine tragbare Möglichkeit, die Tests mit mehreren Compilern auszuführen.BS_thread_pool_utils.hpp enthält mehrere nützliche Nutzklassen.BS::signaller Utility -Klasse.BS::synced_stream .BS::timer Utility -Klasse einfach.detach_sequence() und submit_sequence() in die Warteschlange aufgezählt wurden.reset() .get_tasks_queued() , get_tasks_running() und get_tasks_total() Mitgliedsfunktionen.get_thread_count() .pause() , unpause() und is_paused() ; Bei der Pause holen Themen keine neuen Aufgaben aus der Warteschlange ab.purge() warten.submit_task() oder submit_loop() aus dem Haupt -Thread eingereicht wurden, über ihre Zukunft.BS::this_thread::get_index() und einen Zeiger auf den Pool, der den Thread mit BS::this_thread::get_pool() besitzt.get_thread_ids() oder den implementierende Thread-Handles mit der Option get_native_handles() MITTE-Funktion.Diese Bibliothek sollte erfolgreich auf C ++ 17 Standard-kompilierenden Compiler zusammenstellen, für alle Betriebssysteme und Architekturen, für die ein solcher Compiler verfügbar ist. Die Kompatibilität wurde mit einem 24-Kern-Intel I9-13900K CPU mit den folgenden Compilern und Plattformen überprüft:
In addition, this library was tested on a Digital Research Alliance of Canada node equipped with two 20-core / 40-thread Intel Xeon Gold 6148 CPUs (for a total of 40 cores and 80 threads), running CentOS Linux 7.9.2009.
The test program BS_thread_pool_test.cpp was compiled without warnings (using the warning flags -Wall -Wextra -Wconversion -Wsign-conversion -Wpedantic -Weffc++ -Wshadow in GCC/Clang and /W4 in MSVC), executed, and passed all of the automated tests.
Since this library requires C++17 features, the code must be compiled with C++17 support:
- For GCC or Clang, use the -std=c++17 flag. On Linux, you will also need to use the -pthread flag to enable the POSIX threads library.
- For MSVC, use /std:c++17, and preferably also /permissive- to ensure standards conformance.
For maximum performance, it is recommended to compile with all available compiler optimizations:
- For GCC or Clang, use the -O3 flag.
- For MSVC, use /O2.
For example, to compile the test program BS_thread_pool_test.cpp with warnings and optimizations, the following commands are recommended:
With GCC:
g++ BS_thread_pool_test.cpp -std=c++17 -O3 -Wall -Wextra -Wconversion -Wsign-conversion -Wpedantic -Weffc++ -Wshadow -pthread -o BS_thread_pool_test
For Clang, simply replace g++ with clang++. On Windows, replace -o BS_thread_pool_test with -o BS_thread_pool_test.exe and remove -pthread.
With MSVC:
cl BS_thread_pool_test.cpp /std:c++17 /permissive- /O2 /W4 /EHsc /Fo:BS_thread_pool_test.obj /Fe:BS_thread_pool_test.exe
To install BS::thread_pool, simply download the latest release from the GitHub repository, place the header file BS_thread_pool.hpp from the include folder in the desired folder, and include it in your program:
# include " BS_thread_pool.hpp " Der Thread -Pool ist nun über die BS::thread_pool -Klasse zugegriffen. Für eine noch schnellere Installation können Sie die Header -Datei selbst direkt über diese URL herunterladen.
This library also comes with an independent utilities header file, BS_thread_pool_utils.hpp, which is not required for using the thread pool, but contains several utility classes that may be useful for multithreading. This header file is also located in the include folder. It can be downloaded directly at this URL.
This library is also available via various package managers and build systems, including vcpkg, Conan, Meson, and CMake with CPM. Please see below for more information.
The default constructor creates a thread pool with as many threads as the hardware can handle concurrently, as reported by the implementation via std::thread::hardware_concurrency(). This is usually determined by the number of cores in the CPU. If a core is hyperthreaded, it counts as two threads. For example:
// Constructs a thread pool with as many threads as available in the hardware.
BS::thread_pool pool;
Optionally, a number of threads different from the hardware concurrency can be specified as an argument to the constructor. However, note that adding more threads than the hardware can handle will not improve performance, and will most likely in fact hinder it. This option exists in order to allow using fewer threads than the hardware concurrency, in cases where you wish to leave some threads available for other processes. For example:
// Constructs a thread pool with only 12 threads.
BS::thread_pool pool(12);
When the thread pool is used, the main thread of a program should normally only submit tasks to the thread pool and wait for them to finish, and should not perform any computationally intensive tasks itself. In that case, it is recommended to use the default value for the number of threads. This ensures that all the threads available in the hardware will be put to work while the main thread waits.
The member function get_thread_count() returns the number of threads in the pool. This will be equal to std::thread::hardware_concurrency() if the default constructor was used.
It is generally unnecessary to change the number of threads in the pool after it has been created, since the whole point of a thread pool is that the threads are only created once. However, if needed, this can be done safely and on the fly using the reset() member function.
reset() will wait for all currently running tasks to finish, but will leave the rest of the tasks in the queue. It will then destroy the thread pool and create a new one with the desired new number of threads, as specified in the function's argument (or the hardware concurrency if no argument is given). The new thread pool will then resume executing the tasks that remained in the queue, as well as any newly submitted tasks.
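As a brief illustration (a minimal sketch, not taken from the test program; the thread count of 4 is arbitrary), get_thread_count() and reset() can be used like this:
#include "BS_thread_pool.hpp" // BS::thread_pool
#include <iostream>           // std::cout

int main()
{
    BS::thread_pool pool; // one thread per hardware thread
    std::cout << "The pool has " << pool.get_thread_count() << " threads.\n";
    pool.reset(4); // destroy the pool and create a new one with 4 threads
    std::cout << "The pool now has " << pool.get_thread_count() << " threads.\n";
}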
If desired, the version of this library can be read at compilation time from the following three macros:
- BS_THREAD_POOL_VERSION_MAJOR - indicates the major version.
- BS_THREAD_POOL_VERSION_MINOR - indicates the minor version.
- BS_THREAD_POOL_VERSION_PATCH - indicates the patch version.
std::cout << "Thread pool library version is " << BS_THREAD_POOL_VERSION_MAJOR << '.' << BS_THREAD_POOL_VERSION_MINOR << '.' << BS_THREAD_POOL_VERSION_PATCH << ".\n";
Sample output:
Thread pool library version is 4.1.0.
This can be used, for example, to allow the same codebase to work with several incompatible versions of the library using #if statements.
Note: this feature is only available since v4.0.1. Earlier releases of this library do not define these macros.
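Since earlier releases do not define these macros, a minimal sketch of such a compile-time check (the condition here is purely illustrative) might first test whether the macro is defined at all:
#if defined(BS_THREAD_POOL_VERSION_MAJOR) && (BS_THREAD_POOL_VERSION_MAJOR >= 4)
    // Code using the v4.x API, e.g. submit_task().
#else
    // Fallback code for earlier versions of the library.
#endif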
In this section we will learn how to submit a task with no arguments, but possibly with a return value, to the queue. Once a task has been submitted, it will be executed as soon as a thread becomes available. Tasks are executed in the order they were submitted (first in, first out), unless task priority is enabled (see below).
For example, if the pool has 8 threads and an empty queue, and we submit 16 tasks, we should expect the first 8 tasks to be executed in parallel, with the remaining tasks picked up by the threads one at a time as each thread finishes its first task, until no tasks are left in the queue.
The member function submit_task() is used to submit tasks to the queue. It takes exactly one input, the task to submit. This task must be a function with no arguments, but it may have a return value. The return value of submit_task() is an std::future associated with the task.
If the submitted function has a return value of type T, the future will be of type std::future<T>, and will be set to the function's return value when the function finishes its execution. If the submitted function does not have a return value, the future will be an std::future<void>, which does not return any value but can still be used to wait for the function to finish.
Using auto for the return value of submit_task() means the compiler will automatically detect which instance of the template std::future to use. However, specifying the particular type std::future<T>, as in the examples below, is recommended for increased readability.
To wait until the task finishes, use the wait() member function of the future. To obtain the return value, use the get() member function, which also automatically waits for the task to finish if it hasn't yet. Here is a simple example:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < future > // std::future
# include < iostream > // std::cout
int the_answer ()
{
return 42 ;
}
int main ()
{
BS::thread_pool pool;
std::future< int > my_future = pool. submit_task (the_answer);
std::cout << my_future. get () << ' n ' ;
} In diesem Beispiel haben wir die Funktion the_answer() eingereicht, die ein int zurückgibt. Die Mitgliedsfunktion submit_task() des Pools gab daher eine std::future<int> zurück. Wir haben dann die get() -Mitglied -Funktion der Zukunft verwendet, um den Rückgabewert zu erhalten, und druckten sie aus.
In addition to submitting a pre-defined function, we can also use a lambda expression to quickly define the task on the fly. Rewriting the previous example in terms of a lambda expression, we get:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < future > // std::future
# include < iostream > // std::cout
int main ()
{
BS::thread_pool pool;
std::future< int > my_future = pool. submit_task ([]{ return 42 ; });
std::cout << my_future. get () << ' n ' ;
} Hier der Lambda -Ausdruck []{ return 42; } hat zwei Teile:
[] . Dies bedeutet dem Compiler, dass ein Lambda -Ausdruck definiert wird.{ return 42; } Das gibt einfach den Wert 42 zurück.Es ist im Allgemeinen einfacher und schneller, Lambda-Ausdrücke und nicht vordefinierte Funktionen einzureichen, insbesondere aufgrund der Fähigkeit, lokale Variablen zu erfassen, die wir im nächsten Abschnitt diskutieren werden.
Of course, tasks do not have to return values. In the following example, we submit a function with no return value, and then use the future to wait for it to finish executing:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < chrono > // std::chrono
# include < future > // std::future
# include < iostream > // std::cout
# include < thread > // std::this_thread
int main ()
{
BS::thread_pool pool;
const std::future< void > my_future = pool. submit_task (
[]
{
std::this_thread::sleep_for ( std::chrono::milliseconds ( 500 ));
});
std::cout << " Waiting for the task to complete... " ;
my_future. wait ();
std::cout << " Done. " << ' n ' ;
} Hier teilen wir die Lambda in mehrere Zeilen auf, um sie lesbarer zu machen. Der Befehl std::this_thread::sleep_for(std::chrono::milliseconds(500)) weist die Aufgabe an, einfach für 500 Millisekunden zu schlafen und eine rechenintensive Aufgabe zu simulieren.
As noted in the previous section, tasks submitted with submit_task() cannot have any arguments. However, it is easy to submit tasks with arguments by wrapping the function in a lambda, or by using lambda captures directly. Here are two examples.
The following is an example of submitting a pre-defined function with arguments by wrapping it in a lambda:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < future > // std::future
# include < iostream > // std::cout
double multiply ( const double lhs, const double rhs)
{
return lhs * rhs;
}
int main ()
{
BS::thread_pool pool;
std::future< double > my_future = pool. submit_task (
[]
{
return multiply ( 6 , 7 );
});
std::cout << my_future. get () << ' n ' ;
} Wie Sie sehen können, haben wir, um die Argumente zum multiply zu bestehen, einfach multiply(6, 7) explizit in einem Lambda bezeichnet. Wenn die Argumente keine Literale sind, müssen wir die Lambda -Erfassungsklausel verwenden, um die Argumente aus dem lokalen Bereich zu erfassen:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < future > // std::future
# include < iostream > // std::cout
double multiply ( const double lhs, const double rhs)
{
return lhs * rhs;
}
int main ()
{
BS::thread_pool pool;
constexpr double first = 6 ;
constexpr double second = 7 ;
std::future< double > my_future = pool. submit_task (
[first, second]
{
return multiply (first, second);
});
std::cout << my_future. get () << ' n ' ;
} Wir könnten sogar die multiply vollständig loswerden und alles in eine Lambda legen, falls gewünscht:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < future > // std::future
# include < iostream > // std::cout
int main ()
{
BS::thread_pool pool;
constexpr double first = 6 ;
constexpr double second = 7 ;
std::future< double > my_future = pool. submit_task (
[first, second]
{
return first * second;
});
std::cout << my_future. get () << ' n ' ;
} Normalerweise ist es am besten, eine Aufgabe an die Warteschlange mit submit_task() zu senden. Auf diese Weise können Sie darauf warten, dass die Aufgabe später beendet und/oder ihren Rückgabewert erzielt. Manchmal wird jedoch keine Zukunft benötigt, beispielsweise wenn Sie nur eine bestimmte Aufgabe "festlegen und vergessen" möchten oder wenn die Aufgabe bereits mit dem Haupt -Thread oder mit anderen Aufgaben kommuniziert, ohne Futures zu verwenden, wie z. B. durch Zustandsvariablen.
In such cases, you may wish to avoid the overhead involved in assigning a future to the task, in order to increase performance. This is called detaching the task, since the task detaches from the main thread and runs independently.
Detaching tasks is done using the detach_task() member function, which allows you to detach a task to the queue without generating a future for it. As with submit_task(), the task must be a function with no arguments; it also cannot have a return value, since without a future the main thread would have no way of retrieving that value.
Since detach_task() does not return a future, there is no built-in way for the user to know when the task finishes executing. You must manually ensure that the task finishes executing before trying to use anything that depends on its output; otherwise, bad things will happen!
BS::thread_pool provides the wait() member function to allow waiting for all the tasks in the queue, whether they were detached or submitted with a future. The pool's wait() member function works similarly to the wait() member function of std::future. For example, consider the following code:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < chrono > // std::chrono
# include < iostream > // std::cout
# include < thread > // std::this_thread
int main ()
{
BS::thread_pool pool;
int result = 0 ;
pool. detach_task (
[&result]
{
std::this_thread::sleep_for ( std::chrono::milliseconds ( 100 ));
result = 42 ;
});
std::cout << result << ' n ' ;
} Dieses Programm definiert zunächst eine lokale Variable mit dem Namen result und initialisiert es auf 0 . Anschließend löst es eine Aufgabe in Form eines Lambda -Ausdrucks. Beachten Sie, dass das Lambda result durch Referenz erfasst, wie durch das & davor angegeben. Dies bedeutet, dass die Aufgabe result ändern kann, und eine solche Änderung wird im Hauptfaden reflektiert. Die Aufgabe ändert result auf 42 , aber sie schläft zuerst für 100 Millisekunden. Wenn der Haupt -Thread den Wert des result ausdruckt, hatte die Aufgabe noch keine Zeit, seinen Wert zu ändern, da sie noch schläft. Daher druckt das Programm den Anfangswert 0 aus.
To wait for the task, we need to use the pool's wait() member function after detaching it:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < chrono > // std::chrono
# include < iostream > // std::cout
# include < thread > // std::this_thread
int main ()
{
BS::thread_pool pool;
int result = 0 ;
pool. detach_task (
[&result]
{
std::this_thread::sleep_for ( std::chrono::milliseconds ( 100 ));
result = 42 ;
});
pool. wait ();
std::cout << result << ' n ' ;
} Jetzt druckt das Programm wie erwartet den Wert 42 aus. Beachten Sie jedoch, dass wait() auf alle Aufgaben in der Warteschlange wartet, einschließlich anderer Aufgaben, die möglicherweise vor oder nach dem, das uns wichtig ist, eingereicht wurden. Wenn wir auf nur eine Aufgabe warten möchten, wäre submit_task() eine bessere Wahl.
Sometimes you may want to wait for the tasks to finish, but only for a certain amount of time, or until a specific point in time. For example, if the tasks have not finished after a certain duration, you might want to let the user know that there is a delay.
For tasks submitted with futures using submit_task(), this can be achieved using two member functions of std::future:
- wait_for() waits for the task to finish, but stops waiting after the specified duration, given as an argument of type std::chrono::duration, has passed.
- wait_until() waits for the task to finish, but stops waiting after the specified point in time, given as an argument of type std::chrono::time_point, has been reached.
In both cases, these functions return std::future_status::ready if the future is ready, meaning the task has finished and its return value, if any, has been obtained. However, they return std::future_status::timeout if the future is not yet ready when the timeout expires.
Here is an example:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < chrono > // std::chrono
# include < future > // std::future
# include < iostream > // std::cout
# include < thread > // std::this_thread
int main ()
{
BS::thread_pool pool;
const std::future< void > my_future = pool. submit_task (
[]
{
std::this_thread::sleep_for ( std::chrono::milliseconds ( 1000 ));
std::cout << " Task done! n " ;
});
while ( true )
{
if (my_future. wait_for ( std::chrono::milliseconds ( 200 )) != std::future_status::ready)
std::cout << " Sorry, the task is not done yet. n " ;
else
break ;
}
}Die Ausgabe sollte ähnlich aussehen:
Sorry, the task is not done yet.
Sorry, the task is not done yet.
Sorry, the task is not done yet.
Sorry, the task is not done yet.
Task done!
For detached tasks, we cannot use this method, since we do not have futures for them. However, BS::thread_pool has two member functions, also named wait_for() and wait_until(), which similarly wait for a specified duration or until a specified point in time, but do so for all tasks (whether submitted or detached). Instead of an std::future_status, the thread pool's waiting functions return true if all the tasks finished executing, or false if the duration expired or the time point was reached while some tasks were still running.
Here is the same example as above, using detach_task() and pool.wait_for():
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < chrono > // std::chrono
# include < iostream > // std::cout
# include < thread > // std::this_thread
int main ()
{
BS::thread_pool pool;
pool. detach_task (
[]
{
std::this_thread::sleep_for ( std::chrono::milliseconds ( 1000 ));
std::cout << " Task done! n " ;
});
while ( true )
{
if (!pool. wait_for ( std::chrono::milliseconds ( 200 )))
std::cout << " Sorry, the task is not done yet. n " ;
else
break ;
}
}Betrachten wir das folgende Programm:
#include <iostream> // std::cout, std::boolalpha

class flag_class
{
public:
    [[nodiscard]] bool get_flag() const
    {
        return flag;
    }

    void set_flag(const bool arg)
    {
        flag = arg;
    }

private:
    bool flag = false;
};

int main()
{
    flag_class flag_object;
    flag_object.set_flag(true);
    std::cout << std::boolalpha << flag_object.get_flag() << '\n';
}
In this program, a new object flag_object of the class flag_class is created, the flag is set to true using the setter member function set_flag(), and then the value of the flag is printed out using the getter member function get_flag().
What if we want to submit the member function set_flag() as a task to the thread pool? We simply wrap the entire statement flag_object.set_flag(true); in a lambda, and pass flag_object to the lambda by reference, as in this example:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < iostream > // std::cout, std::boolalpha
class flag_class
{
public:
[[nodiscard]] bool get_flag () const
{
return flag;
}
void set_flag ( const bool arg)
{
flag = arg;
}
private:
bool flag = false ;
};
int main ()
{
BS::thread_pool pool;
flag_class flag_object;
pool. submit_task (
[&flag_object]
{
flag_object. set_flag ( true );
})
. wait ();
std::cout << std::boolalpha << flag_object. get_flag () << ' n ' ;
} Dies funktioniert natürlich auch mit detach_task() , wenn wir wait() auf dem Pool selbst anstatt in der zurückgegebenen Zukunft anrufen.
Note that in this example, instead of storing the future returned by submit_task() and then waiting on it, we simply called wait() directly on the returned future. This is a common way to wait for a task when we have nothing else to do in the meantime. Also note that we passed flag_object to the lambda by reference, since we want the flag to be set on that same object, not on a copy of it (passing by value would not have worked anyway, since variables captured by value are implicitly const).
Another thing you might want to do is call a member function from within the object itself, i.e. from another member function. This follows a similar syntax, except that you must also capture this (i.e. a pointer to the current object) in the lambda. Here is an example:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < iostream > // std::cout, std::boolalpha
BS::thread_pool pool;
class flag_class
{
public:
[[nodiscard]] bool get_flag () const
{
return flag;
}
void set_flag ( const bool arg)
{
flag = arg;
}
void set_flag_to_true ()
{
pool. submit_task (
[ this ]
{
set_flag ( true );
})
. wait ();
}
private:
bool flag = false ;
};
int main ()
{
flag_class flag_object;
flag_object. set_flag_to_true ();
std::cout << std::boolalpha << flag_object. get_flag () << ' n ' ;
} Beachten Sie, dass wir in diesem Beispiel den Thread -Pool als globales Objekt definiert haben, damit er außerhalb der Funktion main() zugänglich ist.
One of the most common and effective methods of parallelization is splitting a loop into smaller loops and running them in parallel. It is most effective in "embarrassingly parallel" computations, such as vector or matrix operations, where each iteration of the loop is completely independent of every other iteration.
For example, if we are summing up two vectors of 1000 elements each, and we have 10 threads, we could split the summation into 10 blocks of 100 elements each and run all the blocks in parallel, potentially increasing performance by up to a factor of 10.
BS::thread_pool can automatically parallelize loops. To see how this works, consider the following generic loop:
for (T i = start; i < end; ++i)
    loop(i);
where:
- T is any signed or unsigned integer type.
- The loop is over the range [start, end), i.e. inclusive of start but exclusive of end.
- loop() is an operation performed for each loop index i, such as modifying an array with end - start elements.
This loop can be automatically parallelized and submitted to the thread pool's queue using the member function submit_loop(), which has the following syntax:
pool.submit_loop(start, end, loop, num_blocks);
where:
- start is the first index in the range.
- end is the index after the last index in the range, so the full range is [start, end). In other words, the parallelized loop will be equivalent to the generic loop written above with the same start and end.
- start and end must both be of the same integer type T. See below for examples of what to do if they are not of the same type.
- If end <= start, nothing will happen.
- loop() is the function to execute in each iteration of the loop; it takes exactly one argument, the loop index.
- num_blocks is the number of blocks of the form [a, b) to split the loop into. For example, if the range is [0, 9) and there are 3 blocks, then the blocks will be the ranges [0, 3), [3, 6), and [6, 9).
- If the number of indices is not evenly divisible by the number of blocks, the blocks will not all have the same size; for example, if the range [0, 100) is split into 15 blocks, the result is 10 blocks of size 7, which are executed first, followed by 5 blocks of size 6.
Each block is submitted to the thread pool's queue as a separate task. Therefore, a loop that is split into 3 blocks is split into 3 individual tasks, which may run in parallel. If there is only one block, the entire loop will run as a single task, and no parallelization will take place.
To parallelize the generic loop shown above, we would use the following commands:
BS::multi_future<void> loop_future = pool.submit_loop(start, end, loop, num_blocks);
loop_future.wait();
submit_loop() returns an object of the helper class template BS::multi_future. This is essentially a specialization of std::vector<std::future<T>> with some additional member functions. Each of the num_blocks blocks is assigned an std::future, and all of these futures are stored inside the returned BS::multi_future. When loop_future.wait() is called, the main thread waits until all the tasks generated by submit_loop() finish executing - and only those tasks, not any other tasks that may also be in the queue. This is essentially the role of the BS::multi_future class: waiting for a specific group of tasks, in this case the tasks executing the blocks of the loop.
What value should you use for num_blocks? Omitting this argument, so that the number of blocks equals the number of threads in the pool, is usually a good choice. For the best performance, it is recommended to do your own benchmarks to find the optimal number of blocks for each loop (you can use the BS::timer utility class for that, as sketched below). Using fewer tasks than there are threads may be preferred if you are also running other tasks in parallel. Using more tasks than there are threads may improve performance in some cases, but parallelizing with too many tasks suffers from diminishing returns.
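As a rough sketch of such a benchmark (a minimal example, assuming the BS::timer utility class described later in this document; the loop body and block counts are arbitrary), one could time the same loop with the default and with a custom number of blocks:
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::timer
#include <cstddef>                  // std::size_t
#include <iostream>                 // std::cout
#include <vector>                   // std::vector

int main()
{
    BS::thread_pool pool;
    constexpr std::size_t n = 10'000'000;
    std::vector<double> data(n);
    BS::timer tmr;
    // Time the loop with the default number of blocks (one per thread).
    tmr.start();
    pool.submit_loop<std::size_t>(0, n,
        [&data](const std::size_t i)
        {
            data[i] = static_cast<double>(i) * 0.5;
        })
        .wait();
    tmr.stop();
    std::cout << "Default number of blocks: " << tmr.ms() << " ms\n";
    // Time the same loop again, split into 256 blocks.
    tmr.start();
    pool.submit_loop<std::size_t>(0, n,
        [&data](const std::size_t i)
        {
            data[i] = static_cast<double>(i) * 0.5;
        },
        256)
        .wait();
    tmr.stop();
    std::cout << "256 blocks: " << tmr.ms() << " ms\n";
}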
As a simple example, the following code calculates and prints a table of the squares of all integers from 0 to 99:
#include <iomanip>  // std::setw
#include <iostream> // std::cout

int main()
{
    constexpr unsigned int max = 100;
    unsigned int squares[max];
    for (unsigned int i = 0; i < max; ++i)
        squares[i] = i * i;
    for (unsigned int i = 0; i < max; ++i)
        std::cout << std::setw(2) << i << "^2 = " << std::setw(4) << squares[i] << ((i % 5 != 4) ? " | " : "\n");
}
We can parallelize it as follows:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < iomanip > // std::setw
# include < iostream > // std::cout
int main ()
{
BS::thread_pool pool ( 10 );
constexpr unsigned int max = 100 ;
unsigned int squares[max];
const BS::multi_future< void > loop_future = pool. submit_loop < unsigned int >( 0 , max,
[&squares]( const unsigned int i)
{
squares[i] = i * i;
});
loop_future. wait ();
for ( unsigned int i = 0 ; i < max; ++i)
std::cout << std::setw ( 2 ) << i << " ^2 = " << std::setw ( 4 ) << squares[i] << ((i % 5 != 4 ) ? " | " : " n " );
} Da es 10 Threads gibt und wir das Argument num_blocks weggelassen haben, wird die Schleife in 10 Blöcke unterteilt, wobei jeweils 10 Quadrate berechnet werden.
Note that we called submit_loop() with the explicit template parameter <unsigned int>. The reason is that the two loop indices must be of the same type. Here, however, max is an unsigned int, while 0 is a (signed) int, so the types do not match, and the code will not compile unless we force the 0 to be of the correct type. This is most elegantly done by specifying the type of the indices explicitly via the template parameter.
The reason this is not done automatically (e.g. using std::common_type) is that it could cause unintended results, such as accidentally converting negative indices to an unsigned type, or converting large indices to an integer type that is too narrow, resulting in an incorrect loop range.
We could also explicitly cast the 0 to unsigned int, but that does not look as nice:
pool.submit_loop(static_cast<unsigned int>(0), max, /* ... */);
Or we could use a C-style cast:
pool.submit_loop((unsigned int)(0), max, /* ... */);
Or we could use an integer literal suffix:
pool.submit_loop(0U, max, /* ... */);
As a side note, observe that we parallelized the calculation of the squares here, but we did not parallelize printing the results. This is for two reasons: first, the results should be printed in their original order, while the blocks of a parallelized loop are not guaranteed to execute in order; and second, as explained below in the section on BS::synced_stream, printing to a stream from multiple threads in parallel can garble the output.
Just as in the case of detach_task() vs. submit_task(), sometimes you may want to parallelize a loop, but you do not need it to return a BS::multi_future. In this case, you can save the overhead of generating the futures (which can be significant, depending on the number of blocks) by using detach_loop() instead of submit_loop(), with the same arguments.
For example, we could detach the above loop of squares as follows:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < iomanip > // std::setw
# include < iostream > // std::cout
int main ()
{
BS::thread_pool pool ( 10 );
constexpr unsigned int max = 100 ;
unsigned int squares[max];
pool. detach_loop < unsigned int >( 0 , max,
[&squares]( const unsigned int i)
{
squares[i] = i * i;
});
pool. wait ();
for ( unsigned int i = 0 ; i < max; ++i)
std::cout << std::setw ( 2 ) << i << " ^2 = " << std::setw ( 4 ) << squares[i] << ((i % 5 != 4 ) ? " | " : " n " );
} Warning: Since detach_loop() does not return a BS::multi_future , there is no built-in way for the user to know when the loop finishes executing. You must use either wait() as we did here, or some other method such as condition variables, to ensure that the loop finishes executing before trying to use anything that depends on its output. Otherwise, bad things will happen!
We have seen that detach_loop() and submit_loop() execute the function loop(i) for each index i in the loop. However, behind the scenes, the loop is split into blocks, and each block executes the loop() function multiple times. Each block has an internal loop of the form (where T is the type of the indices):
for (T i = start; i < end; ++i)
    loop(i);
The start and end indices of each block are determined automatically by the pool. For example, in the previous section, the loop from 0 to 100 was split into 10 blocks of 10 indices each: start = 0 to end = 10, start = 10 to end = 20, and so on; the blocks are not inclusive of the last index, since the for loop has the condition i < end and not i <= end.
However, this also means that the loop() function is called multiple times per block. This generates additional overhead due to the multiple function calls. For short loops, this should not affect performance. However, for very long loops, with millions of indices, the performance cost may be significant.
For this reason, the thread pool library provides two additional member functions for parallelizing loops: detach_blocks() and submit_blocks() . While detach_loop() and submit_loop() execute a function loop(i) once per index but multiple times per block, detach_blocks() and submit_blocks() execute a function block(start, end) once per block.
The main advantage of this method is increased performance, but the main disadvantage is slightly more complicated code. In particular, the user must define the loop from start to end manually within each block. Here is the previous example using detach_blocks() :
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < iomanip > // std::setw
# include < iostream > // std::cout
int main ()
{
BS::thread_pool pool ( 10 );
constexpr unsigned int max = 100 ;
unsigned int squares[max];
pool. detach_blocks < unsigned int >( 0 , max,
[&squares]( const unsigned int start, const unsigned int end)
{
for ( unsigned int i = start; i < end; ++i)
squares[i] = i * i;
});
pool. wait ();
for ( unsigned int i = 0 ; i < max; ++i)
std::cout << std::setw ( 2 ) << i << " ^2 = " << std::setw ( 4 ) << squares[i] << ((i % 5 != 4 ) ? " | " : " n " );
}Note how the block function takes two arguments, and includes the internal loop.
Generally, compiler optimizations should be able to make detach_loop() and submit_loop() perform roughly the same as detach_blocks() and submit_blocks() . However, you should perform your own benchmarks to see which option works best for your particular use case.
Unlike submit_task() , the member function submit_loop() only takes loop functions with no return values. The reason is that it wouldn't make sense to return a future for every single index of the loop. However, submit_blocks() does allow the block function to have a return value, as the number of blocks will generally not be too large, unlike the number of indices.
The block function will be executed once for each block, but the blocks are managed by the thread pool, with the user only able to select the number of blocks, but not the range of each block. Therefore, there is limited usability in returning one value per block. However, for cases where this is desired, such as for summation or some sorting algorithms, submit_blocks() does accept functions with return values, in which case it returns a BS::multi_future<T> object where T is the type of the return values.
Here's an example of a function template summing all elements of type T in a given range:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < cstdint > // std::uint64_t
# include < future > // std::future
# include < iostream > // std::cout
BS::thread_pool pool;
template < typename T>
T sum (T min, T max)
{
BS::multi_future<T> loop_future = pool. submit_blocks <T>(
min, max + 1 ,
[]( const T start, const T end)
{
T block_total = 0 ;
for (T i = start; i < end; ++i)
block_total += i;
return block_total;
},
100 );
T result = 0 ;
for (std::future<T>& future : loop_future)
result += future. get ();
return result;
}
int main ()
{
std::cout << sum<std:: uint64_t >( 1 , 1'000'000 );
} Here we used the fact that BS::multi_future<T> is a specialization of std::vector<std::future<T>> , so we can use a range-based for loop to iterate over the futures, and use the get() member function of each future to get its value. The values of the futures will be the partial sums from each block, so when we add them up, we will get the total sum. Note that we divided the loop into 100 blocks, so there will be 100 futures in total, each with the partial sum of 10,000 numbers.
The range-based for loop will likely start before the parallelized loop has finished executing; each time it reaches a future, it will get the value of that future if it is ready, or wait until the future is ready and then get the value. This increases performance, since we can start summing up the results without waiting for the entire loop to finish executing first - we only need to wait for individual blocks.
If we did want to wait until the entire loop finishes before summing the results, we could have used the get() member function of the BS::multi_future<T> object itself, which returns an std::vector<T> with the values obtained from each future. In that case, the sum could be obtained after calling submit_blocks() as follows:
std::vector<T> partial_sums = loop_future.get();
T result = std::reduce(partial_sums.begin(), partial_sums.end()); // std::reduce requires #include <numeric>
return result;
The member functions detach_loop(), submit_loop(), detach_blocks(), and submit_blocks() parallelize a loop by splitting it into blocks, and submitting each block as an individual task to the queue, with each such task iterating over all the indices in the corresponding block's range, which can be numerous. However, sometimes we have loops with few indices, or more generally, a sequence of tasks enumerated by some index. In such cases, we can avoid the overhead of splitting into blocks and simply submit each individual index as its own independent task to the pool's queue.
This can be done with detach_sequence() and submit_sequence() . The syntax of these functions is similar to detach_loop() and submit_loop() , except that they don't have the num_blocks argument at the end. The sequence function must take only one argument, the index. As usual, detach_sequence() detaches the tasks and does not return a future, while submit_sequence() returns a BS::multi_future . If the tasks in the sequence return values, then the futures will contain those values, otherwise they will be void futures.
Here is a simple example:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < cstdint > // std::uint64_t
# include < iostream > // std::cout
# include < vector > // std::vector
using ui64 = std:: uint64_t ;
ui64 factorial ( const ui64 n)
{
ui64 result = 1 ;
for (ui64 i = 2 ; i <= n; ++i)
result *= i;
return result;
}
int main ()
{
BS::thread_pool pool;
constexpr ui64 max = 20 ;
BS::multi_future<ui64> sequence_future = pool. submit_sequence <ui64>( 0 , max + 1 , factorial);
std::vector<ui64> factorials = sequence_future. get ();
for (ui64 i = 0 ; i < max + 1 ; ++i)
std::cout << i << " ! = " << factorials[i] << ' n ' ;
}BS::multi_future<T> The helper class template BS::multi_future<T> , which we have been using throughout this section, provides a convenient way to collect and access groups of futures. This class is a specialization of std::vector<T> , so it should be used in a similar way:
- Use the [] operator to access the future at a specific index, or the push_back() member function to append a new future to the list.
- The size() member function tells you how many futures are currently stored in the object.
However, BS::multi_future<T> also has additional member functions that are aimed specifically at handling futures:
- Use wait() to wait for all of them at once, or get() to get an std::vector<T> with the results from all of them.
- Use ready_count() to check how many of the futures are ready.
- Use valid() to check whether the futures are all valid.
- Wait for a specified duration with wait_for(), or wait until a specified time with wait_until(). These functions return true if all the futures have been waited for before the duration expired or the time point was reached, and false otherwise.
Aside from using BS::multi_future<T> to track the execution of parallelized loops, it can also be used, for example, whenever you have several different groups of tasks and you want to track the execution of each group individually.
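As a brief illustrative sketch (not one of the library's own examples), here is how some of these member functions might be used with a group of tasks submitted via submit_sequence():
#include "BS_thread_pool.hpp" // BS::thread_pool, BS::multi_future
#include <iostream>           // std::cout
#include <vector>             // std::vector

int main()
{
    BS::thread_pool pool;
    // Submit a group of 10 tasks, each returning the cube of its index.
    BS::multi_future<int> group_future = pool.submit_sequence<int>(0, 10,
        [](const int i)
        {
            return i * i * i;
        });
    // Check how many of the futures are ready so far (anywhere from 0 to 10).
    std::cout << group_future.ready_count() << " tasks are ready so far.\n";
    // Wait for all the tasks in this group and collect their results.
    const std::vector<int> cubes = group_future.get();
    for (const int cube : cubes)
        std::cout << cube << ' ';
    std::cout << '\n';
}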
The optional header file BS_thread_pool_utils.hpp contains several useful utility classes. These are not necessary for using the thread pool itself; BS_thread_pool.hpp is the only header file required. However, the utility classes can make writing multithreading code more convenient.
As with the main header file, the version of the utilities header file can be found by checking three macros:
- BS_THREAD_POOL_UTILS_VERSION_MAJOR - indicates the major version.
- BS_THREAD_POOL_UTILS_VERSION_MINOR - indicates the minor version.
- BS_THREAD_POOL_UTILS_VERSION_PATCH - indicates the patch version.
BS::synced_stream
When printing to an output stream from multiple threads in parallel, the output may become garbled. For example, consider this code:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include < iostream > // std::cout
BS::thread_pool pool;
int main ()
{
pool. detach_sequence ( 0 , 5 ,
[]( int i)
{
std::cout << " Task no. " << i << " executing. n " ;
});
}The output will be a mess similar to this:
Task no. Task no. Task no. 3 executing.
0 executing.
Task no. 41 executing.
Task no. 2 executing.
executing.
The reason is that, although each individual insertion to std::cout is thread-safe, there is no mechanism in place to ensure subsequent insertions from the same thread are printed contiguously.
The utility class BS::synced_stream is designed to eliminate such synchronization issues. The constructor takes one optional argument, specifying the output stream to print to. If no argument is supplied, std::cout will be used:
// Construct a synced stream that will print to std::cout.
BS::synced_stream sync_out;
// Construct a synced stream that will print to the output stream my_stream.
BS::synced_stream sync_out(my_stream);
The member function print() takes an arbitrary number of arguments, which are inserted into the stream one by one, in the order they were given. println() does the same, but also prints a newline character \n at the end, for convenience. A mutex is used to synchronize this process, so that any other calls to print() or println() using the same BS::synced_stream object must wait until the previous call has finished.
As an example, this code:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include " BS_thread_pool_utils.hpp " // BS::synced_stream
BS::synced_stream sync_out;
BS::thread_pool pool;
int main ()
{
pool. detach_sequence ( 0 , 5 ,
[]( int i)
{
sync_out. println ( " Task no. " , i, " executing. " );
});
}Will print out:
Task no. 0 executing.
Task no. 1 executing.
Task no. 2 executing.
Task no. 3 executing.
Task no. 4 executing.
Warning: Always create the BS::synced_stream object before the BS::thread_pool object, as we did in this example. When the BS::thread_pool object goes out of scope, it waits for the remaining tasks to be executed. If the BS::synced_stream object goes out of scope before the BS::thread_pool object, then any tasks using the BS::synced_stream will crash. Since objects are destructed in the opposite order of construction, creating the BS::synced_stream object before the BS::thread_pool object ensures that the BS::synced_stream is always available to the tasks, even while the pool is destructing.
Most stream manipulators defined in the headers <ios> and <iomanip> , such as std::setw (set the character width of the next output), std::setprecision (set the precision of floating point numbers), and std::fixed (display floating point numbers with a fixed number of digits), can be passed to print() and println() just as you would pass them to a stream.
The only exceptions are the flushing manipulators std::endl and std::flush , which will not work because the compiler will not be able to figure out which template specializations to use. Instead, use BS::synced_stream::endl and BS::synced_stream::flush . Here is an example:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include " BS_thread_pool_utils.hpp " // BS::synced_stream
# include < cmath > // std::sqrt
# include < iomanip > // std::setprecision, std::setw
# include < ios > // std::fixed
BS::synced_stream sync_out;
BS::thread_pool pool;
int main ()
{
sync_out. print ( std::setprecision ( 10 ), std::fixed);
pool. detach_sequence ( 0 , 16 ,
[]( int i)
{
sync_out. print ( " The square root of " , std::setw ( 2 ), i, " is " , std::sqrt (i), " . " , BS::synced_stream::endl);
});
} Note, however, that BS::synced_stream::endl should only be used if flushing is desired; otherwise, a newline character should be used instead.
BS::timer
If you are using a thread pool, then your code is most likely performance-critical. Achieving maximum performance requires performing a considerable amount of benchmarking to determine the optimal settings and algorithms. Therefore, it is important to be able to measure the execution time of various computations and operations under different conditions.
The utility class BS::timer provides a simple way to measure execution time. It is very straightforward to use:
- Construct a BS::timer object.
- Start the measurement by calling the start() member function.
- Stop the measurement by calling the stop() member function.
- Call ms() to obtain the elapsed time for the computation in milliseconds.
- If you wish to check the elapsed time without stopping the measurement, call current_ms() to obtain the elapsed time so far but keep the timer ticking.
For example:
BS::timer tmr;
tmr.start();
do_something();
tmr.stop();
std::cout << "The elapsed time was " << tmr.ms() << " ms.\n";
A practical application of the BS::timer class can be found in the benchmark portion of the test program BS_thread_pool_test.cpp.
BS::signaller
BS::signaller is a utility class which can be used to allow simple signalling between threads. To use it, construct an object and then pass it to the different threads. Multiple threads can call the wait() member function of the signaller. When another thread calls the ready() member function, the waiting threads will stop waiting.
That's really all there is to it; BS::signaller is really just a convenient wrapper around std::promise , which contains both the promise and its future. For usage examples, please see the test program BS_thread_pool_test.cpp .
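As a minimal sketch (not taken from the test program), assuming only the wait() and ready() member functions described above, a signaller might be used like this:
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::signaller, BS::synced_stream

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
    BS::signaller signal;
    // Detach a few tasks that wait for the signal before doing their work.
    for (int i = 0; i < 3; ++i)
    {
        pool.detach_task(
            [i, &signal]
            {
                signal.wait(); // block until another thread calls ready()
                sync_out.println("Task ", i, " received the signal.");
            });
    }
    sync_out.println("Sending the signal...");
    signal.ready(); // release all the waiting tasks
    pool.wait();    // make sure the tasks finish before signal goes out of scope
}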
Sometimes you may wish to monitor what is happening with the tasks you submitted to the pool. This may be done using three member functions:
- get_tasks_queued() gets the number of tasks currently waiting in the queue to be executed by the threads.
- get_tasks_running() gets the number of tasks currently being executed by the threads.
- get_tasks_total() gets the total number of unfinished tasks: either still waiting in the queue, or running in a thread.
- Note that get_tasks_total() == get_tasks_queued() + get_tasks_running().
These functions are demonstrated in the following program:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include " BS_thread_pool_utils.hpp " // BS::synced_stream
# include < chrono > // std::chrono
# include < thread > // std::this_thread
BS::synced_stream sync_out;
BS::thread_pool pool ( 4 );
void sleep_half_second ( const int i)
{
std::this_thread::sleep_for ( std::chrono::milliseconds ( 500 ));
sync_out. println ( " Task " , i, " done. " );
}
void monitor_tasks ()
{
sync_out. println (pool. get_tasks_total (), " tasks total, " , pool. get_tasks_running (), " tasks running, " , pool. get_tasks_queued (), " tasks queued. " );
}
int main ()
{
pool. wait ();
pool. detach_sequence ( 0 , 12 , sleep_half_second);
monitor_tasks ();
std::this_thread::sleep_for ( std::chrono::milliseconds ( 750 ));
monitor_tasks ();
std::this_thread::sleep_for ( std::chrono::milliseconds ( 500 ));
monitor_tasks ();
std::this_thread::sleep_for ( std::chrono::milliseconds ( 500 ));
monitor_tasks ();
}Assuming you have at least 4 hardware threads (so that 4 tasks can run concurrently), the output should be similar to:
12 tasks total, 0 tasks running, 12 tasks queued.
Task 0 done.
Task 1 done.
Task 2 done.
Task 3 done.
8 tasks total, 4 tasks running, 4 tasks queued.
Task 4 done.
Task 5 done.
Task 6 done.
Task 7 done.
4 tasks total, 4 tasks running, 0 tasks queued.
Task 8 done.
Task 9 done.
Task 10 done.
Task 11 done.
0 tasks total, 0 tasks running, 0 tasks queued.
The reason we called pool.wait() in the beginning is that when the thread pool is created, an initialization task runs in each thread, so if we don't wait, the first line will say there are 16 tasks in total, including the 4 initialization tasks. Please see below for more information.
Consider a situation where the user cancels a multithreaded operation while it is still ongoing. Perhaps the operation was split into multiple tasks, and half of the tasks are currently being executed by the pool's threads, but the other half are still waiting in the queue.
The thread pool cannot terminate the tasks that are already running, as the C++17 standard does not provide that functionality (and in any case, abruptly terminating a task while it's running could have extremely bad consequences, such as memory leaks and data corruption). However, the tasks that are still waiting in the queue can be purged using the purge() member function.
Once purge() is called, any tasks still waiting in the queue will be discarded, and will never be executed by the threads. Please note that there is no way to restore the purged tasks; they are gone forever.
Consider for example the following program:
# include " BS_thread_pool.hpp " // BS::thread_pool
# include " BS_thread_pool_utils.hpp " // BS::synced_stream
# include < chrono > // std::chrono
# include < thread > // std::this_thread
BS::synced_stream sync_out;
BS::thread_pool pool ( 4 );
int main ()
{
for ( size_t i = 0 ; i < 8 ; ++i)
{
pool. detach_task (
[i]
{
std::this_thread::sleep_for ( std::chrono::milliseconds ( 100 ));
sync_out. println ( " Task " , i, " done. " );
});
}
std::this_thread::sleep_for ( std::chrono::milliseconds ( 50 ));
pool. purge ();
pool. wait ();
} The program submit 8 tasks to the queue. Each task waits 100 milliseconds and then prints a message. The thread pool has 4 threads, so it will execute the first 4 tasks in parallel, and then the remaining 4. We wait 50 milliseconds, to ensure that the first 4 tasks have all started running. Then we call purge() to purge the remaining 4 tasks. As a result, these tasks never get executed. However, since the first 4 tasks are still running when purge() is called, they will finish uninterrupted; purge() only discards tasks that have not yet started running. The output of the program therefore only contains the messages from the first 4 tasks:
Task 0 done.
Task 1 done.
Task 2 done.
Task 3 done.
submit_task() catches any exceptions thrown by the submitted task and forwards them to the corresponding future. They can then be caught when invoking the get() member function of the future. For example:
# include " BS_thread_pool.hpp "
BS::synced_stream sync_out;
BS::thread_pool pool;
double inverse ( const double x)
{
if (x == 0 )
throw std::runtime_error ( " Division by zero! " );
else
return 1 / x;
}
int main ()
{
constexpr double num = 0 ;
std::future< double > my_future = pool. submit_task (inverse, num);
try
{
const double result = my_future. get ();
sync_out. println ( " The inverse of " , num, " is " , result, " . " );
}
catch ( const std:: exception & e)
{
sync_out. println ( " Caught exception: " , e. what ());
}
}The output will be:
Caught exception: Division by zero!
However, if you change num to any non-zero number, no exceptions will be thrown and the inverse will be printed.
It is important to note that wait() does not throw any exceptions; only get() does. Therefore, even if your task does not return anything, i.e. your future is an std::future<void>, you must still call get() on that future if you want to catch exceptions thrown by the task. Here is an example:
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::synced_stream
#include <stdexcept>                // std::runtime_error

BS::synced_stream sync_out;
BS::thread_pool pool;

void print_inverse(const double x)
{
    if (x == 0)
        throw std::runtime_error("Division by zero!");
    else
        sync_out.println("The inverse of ", x, " is ", 1 / x, ".");
}

int main()
{
    constexpr double num = 0;
    std::future<void> my_future = pool.submit_task(print_inverse, num);
    try
    {
        my_future.get();
    }
    catch (const std::exception& e)
    {
        sync_out.println("Caught exception: ", e.what());
    }
}
When using BS::multi_future to handle multiple futures at once, exception handling works the same way: if any of the futures may throw exceptions, you can catch these exceptions when calling get(), even in the case of BS::multi_future<void>.
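For illustration, here is a minimal sketch (not taken from the library's own examples) of how an exception thrown by one task in a sequence propagates through the corresponding BS::multi_future:
#include "BS_thread_pool.hpp"       // BS::thread_pool, BS::multi_future
#include "BS_thread_pool_utils.hpp" // BS::synced_stream
#include <stdexcept>                // std::runtime_error

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
    // Submit a sequence of 4 tasks; the task with index 2 throws an exception.
    BS::multi_future<void> my_futures = pool.submit_sequence(0, 4,
        [](const int i)
        {
            if (i == 2)
                throw std::runtime_error("Task 2 failed!");
        });
    try
    {
        my_futures.get(); // Rethrows the exception stored by the failing task.
    }
    catch (const std::exception& e)
    {
        sync_out.println("Caught exception: ", e.what());
    }
}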
If you do not require exception handling, or if exceptions are explicitly disabled in your codebase, you can define the macro BS_THREAD_POOL_DISABLE_EXCEPTION_HANDLING before including BS_thread_pool.hpp , which will disable exception handling in submit_task() . Note that if the feature-test macro __cpp_exceptions is undefined, BS_THREAD_POOL_DISABLE_EXCEPTION_HANDLING will be automatically defined.
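As a minimal sketch, the macro only needs to be defined before the header is included. Note that with exception handling disabled, tasks should handle their own errors, since an exception escaping a worker thread would typically terminate the program:
#define BS_THREAD_POOL_DISABLE_EXCEPTION_HANDLING
#include "BS_thread_pool.hpp" // BS::thread_pool

int main()
{
    BS::thread_pool pool;
    // With exception handling disabled, submit_task() no longer wraps the task in a try-catch block,
    // so any error handling should happen inside the task itself.
    pool.submit_task([] { /* handle errors here */ }).wait();
}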
BS::thread_pool comes with a variety of methods to obtain information about the threads in the pool:
- BS::this_thread provides functionality similar to std::this_thread. If the current thread belongs to a BS::thread_pool object, then BS::this_thread::get_index() can be used to get the index of the current thread, and BS::this_thread::get_pool() can be used to get a pointer to the thread pool that owns the current thread. Please see the reference below for more details.
- get_thread_ids() returns a vector containing the unique identifiers for each of the pool's threads, as obtained by std::thread::get_id(). These values are not very useful on their own, but can be used for whatever purpose the user requires.
- get_native_handles(), if enabled, returns a vector containing the underlying implementation-defined thread handles for each of the pool's threads, as obtained by std::thread::native_handle(). For more information, see the relevant section below.

Sometimes it is necessary to initialize the threads before they run any tasks. This can be done by submitting a suitable initialization function to the constructor or to reset(), either as the only argument or as the second argument after the desired number of threads. The thread initialization function must take no arguments and have no return value. However, if needed, the function can use BS::this_thread::get_index() and BS::this_thread::get_pool() to figure out which thread and pool it belongs to.
The thread initialization function is submitted as a set of special tasks, one per thread, which bypass the queue, but still count towards the number of running tasks, which means get_tasks_total() and get_tasks_running() will report that these tasks are running if they are checked immediately after the pool is initialized.
This is done so that the user has the option to either wait for the initialization tasks to finish, by calling wait() on the pool, or just keep going. In either case, the initialization tasks will always finish executing before any tasks are picked out of the queue, so there is no reason to wait for them to finish unless they have some side-effects that affect the main thread.
Here is a simple example:
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::synced_stream
#include <random>                   // std::mt19937_64, std::random_device

BS::synced_stream sync_out;
thread_local std::mt19937_64 twister;

int main()
{
    BS::thread_pool pool(
        []
        {
            twister.seed(std::random_device()());
        });
    pool.submit_sequence(0, 4,
        [](int)
        {
            sync_out.println("I generated a random number: ", twister());
        })
        .wait();
}
In this example, we create a thread_local Mersenne twister engine, meaning that each thread has its own independent engine. However, we did not seed the engine, so each thread will generate the exact same sequence of pseudo-random numbers. To remedy this, we pass an initialization function to the BS::thread_pool constructor which seeds the twister in each thread with the (hopefully) non-deterministic random number generator std::random_device.
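As a further variation (hypothetical, not from the original documentation), the initialization function can use BS::this_thread::get_index() to report which pool thread is being initialized:
#include "BS_thread_pool.hpp"       // BS::thread_pool, BS::this_thread
#include "BS_thread_pool_utils.hpp" // BS::synced_stream

BS::synced_stream sync_out;

int main()
{
    BS::thread_pool pool(4,
        []
        {
            // Inside a pool thread, get_index() always contains a value.
            sync_out.println("Thread ", BS::this_thread::get_index().value(), " initialized.");
        });
    pool.wait(); // Optionally wait for the initialization tasks to finish.
}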
In C++, it is often crucial to pass function arguments by reference or constant reference, instead of by value. This allows the function to access the object being passed directly, rather than creating a new copy of the object. We have already seen that submitting an argument by reference is a simple matter of capturing it with a & in the lambda capture list. To submit as constant reference, we can use std::as_const as in the following example:
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::synced_stream
#include <utility>                  // std::as_const

BS::synced_stream sync_out;

void increment(int& x)
{
    ++x;
}

void print(const int& x)
{
    sync_out.println(x);
}

int main()
{
    BS::thread_pool pool;
    int n = 0;
    pool.submit_task(
        [&n]
        {
            increment(n);
        })
        .wait();
    pool.submit_task(
        [&n = std::as_const(n)]
        {
            print(n);
        })
        .wait();
}
The increment() function takes a reference to an integer, and increments that integer. Passing the argument by reference guarantees that n itself, in the scope of main(), will be incremented, rather than a copy of it in the scope of increment().
Similarly, the print() function takes a constant reference to an integer, and prints that integer. Passing the argument by constant reference guarantees that the variable will not be accidentally modified by the function, even though we are accessing n itself, rather than a copy. If we replace print with increment , the program won't compile, as increment cannot take constant references.
Generally, it is not really necessary to pass arguments by constant reference, but it is more "correct" to do so, if we would like to guarantee that the variable being referenced is indeed never modified. This section is therefore included here for completeness.
Sometimes you may wish to temporarily pause the execution of tasks, or perhaps you want to submit tasks to the queue in advance and only start executing them at a later time. You can do this using the member functions pause() , unpause() , and is_paused() .
However, these functions are disabled by default, and must be explicitly enabled by defining the macro BS_THREAD_POOL_ENABLE_PAUSE before including BS_thread_pool.hpp . The reason is that pausing the pool adds additional checks to the waiting and worker functions, which have a very small but non-zero overhead.
When you call pause(), the workers will temporarily stop retrieving new tasks out of the queue. However, any tasks already being executed will keep running until they are done, since the thread pool has no control over the internal code of your tasks. If you need to pause a task in the middle of its execution, you must do that manually by programming your own pause mechanism into the task itself. To resume retrieving tasks, call unpause(). To check whether the pool is currently paused, call is_paused().
Here is an example:
#define BS_THREAD_POOL_ENABLE_PAUSE
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::synced_stream
#include <chrono>                   // std::chrono
#include <thread>                   // std::this_thread

BS::synced_stream sync_out;
BS::thread_pool pool(4);

void sleep_half_second(const int i)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    sync_out.println("Task ", i, " done.");
}

void check_if_paused()
{
    if (pool.is_paused())
        sync_out.println("Pool paused.");
    else
        sync_out.println("Pool unpaused.");
}

int main()
{
    pool.detach_sequence(0, 8, sleep_half_second);
    sync_out.println("Submitted 8 tasks.");
    std::this_thread::sleep_for(std::chrono::milliseconds(250));
    pool.pause();
    check_if_paused();
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    sync_out.println("Still paused...");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    pool.detach_sequence(8, 12, sleep_half_second);
    sync_out.println("Submitted 4 more tasks.");
    sync_out.println("Still paused...");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    pool.unpause();
    check_if_paused();
}
Assuming you have at least 4 hardware threads, the output should be similar to:
Submitted 8 tasks.
Pool paused.
Task 0 done.
Task 1 done.
Task 2 done.
Task 3 done.
Still paused...
Submitted 4 more tasks.
Still paused...
Pool unpaused.
Task 4 done.
Task 5 done.
Task 6 done.
Task 7 done.
Task 8 done.
Task 9 done.
Task 10 done.
Task 11 done.
Here is what happened. We initially submitted a total of 8 tasks to the queue. Since we waited for 250ms before pausing, the first 4 tasks had already started running, so they kept running until they finished. While the pool was paused, we submitted 4 more tasks to the queue, but they just waited at the end of the queue. When we unpaused, the remaining 4 initial tasks were executed, followed by the 4 new tasks.
While the workers are paused, wait() will wait for the running tasks instead of all tasks (otherwise it would wait forever). This is demonstrated by the following program:
#define BS_THREAD_POOL_ENABLE_PAUSE
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::synced_stream
#include <chrono>                   // std::chrono
#include <thread>                   // std::this_thread

BS::synced_stream sync_out;
BS::thread_pool pool(4);

void sleep_half_second(const int i)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    sync_out.println("Task ", i, " done.");
}

void check_if_paused()
{
    if (pool.is_paused())
        sync_out.println("Pool paused.");
    else
        sync_out.println("Pool unpaused.");
}

int main()
{
    pool.detach_sequence(0, 8, sleep_half_second);
    sync_out.println("Submitted 8 tasks. Waiting for them to complete.");
    pool.wait();
    pool.detach_sequence(8, 20, sleep_half_second);
    sync_out.println("Submitted 12 more tasks.");
    std::this_thread::sleep_for(std::chrono::milliseconds(250));
    pool.pause();
    check_if_paused();
    sync_out.println("Waiting for the ", pool.get_tasks_running(), " running tasks to complete.");
    pool.wait();
    sync_out.println("All running tasks completed. ", pool.get_tasks_queued(), " tasks still queued.");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    sync_out.println("Still paused...");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    sync_out.println("Still paused...");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    pool.unpause();
    check_if_paused();
    std::this_thread::sleep_for(std::chrono::milliseconds(250));
    sync_out.println("Waiting for the remaining ", pool.get_tasks_total(), " tasks (", pool.get_tasks_running(), " running and ", pool.get_tasks_queued(), " queued) to complete.");
    pool.wait();
    sync_out.println("All tasks completed.");
}
The output should be similar to:
Submitted 8 tasks. Waiting for them to complete.
Task 0 done.
Task 1 done.
Task 2 done.
Task 3 done.
Task 4 done.
Task 5 done.
Task 6 done.
Task 7 done.
Submitted 12 more tasks.
Pool paused.
Waiting for the 4 running tasks to complete.
Task 8 done.
Task 9 done.
Task 10 done.
Task 11 done.
All running tasks completed. 8 tasks still queued.
Still paused...
Still paused...
Pool unpaused.
Waiting for the remaining 8 tasks (4 running and 4 queued) to complete.
Task 12 done.
Task 13 done.
Task 14 done.
Task 15 done.
Task 16 done.
Task 17 done.
Task 18 done.
Task 19 done.
All tasks completed.
The first wait() , which was called while the pool was not paused, waited for all 8 tasks, both running and queued. The second wait() , which was called after pausing the pool, only waited for the 4 running tasks, while the other 8 tasks remained queued, and were not executed since the pool was paused. Finally, the third wait() , which was called after unpausing the pool, waited for the remaining 8 tasks, both running and queued.
Warning: If the thread pool is destroyed while paused, any tasks still in the queue will never be executed!
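As a minimal illustrative sketch (one possible pattern, not a requirement of the library), unpausing the pool and waiting before it goes out of scope ensures that queued tasks are not lost:
#define BS_THREAD_POOL_ENABLE_PAUSE
#include "BS_thread_pool.hpp" // BS::thread_pool

int main()
{
    BS::thread_pool pool(4);
    pool.pause();
    // ... detach tasks here; they accumulate in the queue while the pool is paused ...
    pool.unpause(); // Resume execution so the queued tasks can run.
    pool.wait();    // Wait for all tasks to finish before the pool is destroyed.
}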
Consider the following program:
#include "BS_thread_pool.hpp" // BS::thread_pool
#include <iostream>           // std::cout

int main()
{
    BS::thread_pool pool;
    pool.detach_task(
        [&pool]
        {
            pool.wait();
            std::cout << "Done waiting.\n";
        });
}
This program creates a thread pool, and then detaches a task that waits for tasks in the same thread pool to complete. If you run this program, it will never print the message "Done waiting.", because the task will wait for itself to complete. This causes a deadlock, and the program will wait forever.
Usually, in simple programs, this will never happen. However, in more complicated programs, perhaps ones running multiple thread pools in parallel, wait deadlocks could potentially occur. In such cases, the macro BS_THREAD_POOL_ENABLE_WAIT_DEADLOCK_CHECK can be defined before including BS_thread_pool.hpp . wait() will then check whether the user tried to call it from within a thread of the same pool, and if so, it will throw the exception BS::thread_pool::wait_deadlock instead of waiting. This check is disabled by default because wait deadlocks are not something that happens often, and the check adds a small but non-zero overhead every time wait() is called.
Here is an example:
#define BS_THREAD_POOL_ENABLE_WAIT_DEADLOCK_CHECK
#include "BS_thread_pool.hpp" // BS::thread_pool
#include <iostream>           // std::cout

int main()
{
    BS::thread_pool pool;
    pool.detach_task(
        [&pool]
        {
            try
            {
                pool.wait();
                std::cout << "Done waiting.\n";
            }
            catch (const BS::thread_pool::wait_deadlock&)
            {
                std::cout << "Error: Deadlock!\n";
            }
        });
}
This time, wait() will detect the deadlock and throw an exception, causing the output to be "Error: Deadlock!".
Note that if the feature-test macro __cpp_exceptions is undefined, BS_THREAD_POOL_ENABLE_WAIT_DEADLOCK_CHECK will be automatically undefined.
The BS::thread_pool member function get_native_handles() returns a vector containing the underlying implementation-defined thread handles for each of the pool's threads. These can then be used in an implementation-specific way to manage the threads at the OS level.
However, note that this will generally not be portable code. Furthermore, this feature uses std::thread::native_handle(), which is in the C++ standard library, but is not guaranteed to be present on all systems. Therefore, this feature is turned off by default, and must be turned on by defining the macro BS_THREAD_POOL_ENABLE_NATIVE_HANDLES before including BS_thread_pool.hpp .
Here is an example:
#define BS_THREAD_POOL_ENABLE_NATIVE_HANDLES
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::synced_stream
#include <thread>                   // std::thread
#include <vector>                   // std::vector

BS::synced_stream sync_out;
BS::thread_pool pool(4);

int main()
{
    std::vector<std::thread::native_handle_type> handles = pool.get_native_handles();
    for (BS::concurrency_t i = 0; i < handles.size(); ++i)
        sync_out.println("Thread ", i, " native handle: ", handles[i]);
}
The output will depend on your compiler and operating system. Here is an example:
Thread 0 native handle: 000000F4
Thread 1 native handle: 000000F8
Thread 2 native handle: 000000EC
Thread 3 native handle: 000000FC
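As a hedged sketch of what such implementation-specific use might look like, assuming a Linux system with glibc, where the native handle type is pthread_t, the handles could be used to give the worker threads human-readable names (pthread_setname_np is a non-portable GNU extension):
#define BS_THREAD_POOL_ENABLE_NATIVE_HANDLES
#include "BS_thread_pool.hpp" // BS::thread_pool
#include <cstddef>            // std::size_t
#include <pthread.h>          // pthread_setname_np (GNU extension, not portable)
#include <string>             // std::string, std::to_string
#include <thread>             // std::thread
#include <vector>             // std::vector

int main()
{
    BS::thread_pool pool(4);
    const std::vector<std::thread::native_handle_type> handles = pool.get_native_handles();
    for (std::size_t i = 0; i < handles.size(); ++i)
    {
        // On Linux, thread names are limited to 15 characters plus the null terminator.
        const std::string name = "worker-" + std::to_string(i);
        pthread_setname_np(handles[i], name.c_str());
    }
}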
Defining the macro BS_THREAD_POOL_ENABLE_PRIORITY before including BS_thread_pool.hpp enables task priority. The priority of a task or group of tasks may then be specified as an additional argument (at the end of the argument list) to detach_task() , submit_task() , detach_blocks() , submit_blocks() , detach_loop() , submit_loop() , detach_sequence() , and submit_sequence() . If the priority is not specified, the default value will be 0.
The priority is a number of type BS::priority_t , which is a signed 16-bit integer, so it can have any value between -32,768 and 32,767. The tasks will be executed in priority order from highest to lowest. If priority is assigned to the block/loop/sequence parallelization functions, which submit multiple tasks, then all of these tasks will have the same priority.
The namespace BS::pr contains some pre-defined priorities for users who wish to avoid magic numbers and enjoy better future-proofing. In order of decreasing priority, the pre-defined priorities are: BS::pr::highest , BS::pr::high , BS::pr::normal , BS::pr::low , and BS::pr::lowest .
Here is a simple example:
#define BS_THREAD_POOL_ENABLE_PRIORITY
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::synced_stream

BS::synced_stream sync_out;
BS::thread_pool pool(1);

int main()
{
    pool.detach_task([] { sync_out.println("This task will execute third."); }, BS::pr::normal);
    pool.detach_task([] { sync_out.println("This task will execute fifth."); }, BS::pr::lowest);
    pool.detach_task([] { sync_out.println("This task will execute second."); }, BS::pr::high);
    pool.detach_task([] { sync_out.println("This task will execute first."); }, BS::pr::highest);
    pool.detach_task([] { sync_out.println("This task will execute fourth."); }, BS::pr::low);
}
This program will print out the tasks in the correct priority order. Note that for simplicity, we used a pool with just one thread, so the tasks will run one at a time. In a pool with 5 or more threads, all 5 tasks will actually run more or less at the same time, because, for example, the task with the second-highest priority will be picked up by another thread while the task with the highest priority is still running.
Of course, this is just a pedagogical example. In a realistic use case we may want, for example, to submit tasks that must be completed immediately with high priority so they skip over other tasks already in the queue, or background non-urgent tasks with low priority so they evaluate only after higher-priority tasks are done.
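For illustration, here is a minimal hypothetical sketch along those lines, with a batch of low-priority background tasks and one urgent task that jumps ahead of any of them that have not yet started running:
#define BS_THREAD_POOL_ENABLE_PRIORITY
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::synced_stream

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
    // Fill the queue with non-urgent background work at the lowest priority.
    pool.detach_sequence(0, 100,
        [](const int i)
        {
            sync_out.println("Background task ", i, " done.");
        },
        BS::pr::lowest);
    // This task skips ahead of all queued background tasks that have not started yet.
    pool.submit_task([] { sync_out.println("Urgent task done."); }, BS::pr::highest).wait();
    pool.wait();
}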
Here are some subtleties to note when using task priority:
- When priority is enabled, the task queue is implemented using std::priority_queue, which has O(log n) complexity for storing new tasks, but only O(1) complexity for retrieving the next (i.e. highest-priority) task. This is in contrast with std::queue, used if priority is disabled, which both stores and retrieves with O(1) complexity.
- The C++ standard library implements std::priority_queue as a binary heap, which means tasks are stored as a binary tree instead of sequentially, so tasks with equal priority are not guaranteed to run in submission order. To execute tasks in submission order, give them monotonically decreasing priorities.
- BS::priority_t is defined to be std::int_least16_t, since this type is guaranteed to be present on all systems, rather than std::int16_t, which is optional in the C++ standard. This means that on some exotic systems BS::priority_t may actually have more than 16 bits. However, the pre-defined priorities are 100% portable, and will always have the same values (e.g. BS::pr::highest = 32767) regardless of the actual bit width.

The file BS_thread_pool_test.cpp in the tests folder of the GitHub repository will perform automated tests of all aspects of the library. The output will be printed both to std::cout and to a file with the same name as the executable and the suffix -yyyy-mm-dd_hh.mm.ss.log based on the current date and time. In addition, the code is meant to serve as an extensive example of how to properly use the library.
Please make sure to compile BS_thread_pool_test.cpp with optimization flags enabled (e.g. -O3 on GCC/Clang or /O2 on MSVC).

The test program also takes command line arguments for automation purposes:
- help: Show a help message and exit. Any other arguments will be ignored.
- log: Create a log file.
- tests: Perform standard tests.
- deadlock: Perform long deadlock tests.
- benchmarks: Perform benchmarks.

If no options are entered, the default is: log tests benchmarks.
By default, the test program enables all the optional features by defining the suitable macros, so it can test them. However, if the macro BS_THREAD_POOL_LIGHT_TEST is defined during compilation, the optional features will not be tested.
A PowerShell script, BS_thread_pool_test.ps1 , is provided for your convenience in the tests folder to make running the test on multiple compilers and operating systems easier. Since it is written in PowerShell, it is fully portable and works on Windows, Linux, and macOS. The script will automatically detect if Clang, GCC, and/or MSVC are available, and compile the test program using each available compiler twice - with and without all the optional features. It will then run each compiled test program and report on any errors.
If any of the tests fail, please submit a bug report including the exact specifications of your system (OS, CPU, compiler, etc.) and the generated log file.
If all checks passed, BS_thread_pool_test.cpp performs simple benchmarks by filling a very large vector with values using detach_blocks() . The program decides what the size of the vector should be by testing how many elements are needed to reach a certain target duration when parallelizing using a number of blocks equal to the number of threads. This ensures that the test takes approximately the same amount of time on all systems, and is thus more consistent and portable.
Once the appropriate size of the vector has been determined, the program allocates the vector and fills it with values, calculated according to a fixed prescription. This operation is performed both single-threaded and multithreaded, with the multithreaded computation spread across multiple tasks submitted to the pool.
Several different multithreaded tests are performed, with the number of tasks either equal to, smaller than, or larger than the pool's thread count. Each test is repeated multiple times, with the run times averaged over all runs of the same test. The program keeps increasing the number of blocks by a factor of 2 until diminishing returns are encountered. The run times of the tests are compared, and the maximum speedup obtained is calculated.
As an example, here are the results of the benchmarks from a Digital Research Alliance of Canada node equipped with two 20-core / 40-thread Intel Xeon Gold 6148 CPUs (for a total of 40 cores and 80 threads), running CentOS Linux 7.9.2009. The tests were compiled using GCC v13.2.0 with the -O3 and -march=native flags. The output was as follows:
======================
Performing benchmarks:
======================
Using 80 threads.
Determining the number of elements to generate in order to achieve an approximate mean execution time of 50 ms with 80 tasks...
Each test will be repeated up to 30 times to collect reliable statistics.
Generating 27962000 elements:
[......]
Single-threaded, mean execution time was 2815.2 ms with standard deviation 3.5 ms.
[......]
With 2 tasks, mean execution time was 1431.3 ms with standard deviation 10.1 ms.
[.......]
With 4 tasks, mean execution time was 722.1 ms with standard deviation 11.4 ms.
[..............]
With 8 tasks, mean execution time was 364.9 ms with standard deviation 10.9 ms.
[............................]
With 16 tasks, mean execution time was 181.9 ms with standard deviation 8.0 ms.
[..............................]
With 32 tasks, mean execution time was 110.6 ms with standard deviation 1.8 ms.
[..............................]
With 64 tasks, mean execution time was 64.0 ms with standard deviation 6.3 ms.
[..............................]
With 128 tasks, mean execution time was 59.8 ms with standard deviation 0.8 ms.
[..............................]
With 256 tasks, mean execution time was 59.0 ms with standard deviation 0.0 ms.
[..............................]
With 512 tasks, mean execution time was 52.8 ms with standard deviation 0.4 ms.
[..............................]
With 1024 tasks, mean execution time was 50.7 ms with standard deviation 0.9 ms.
[..............................]
With 2048 tasks, mean execution time was 50.0 ms with standard deviation 0.5 ms.
[..............................]
With 4096 tasks, mean execution time was 49.4 ms with standard deviation 0.5 ms.
[..............................]
With 8192 tasks, mean execution time was 50.2 ms with standard deviation 0.4 ms.
Maximum speedup obtained by multithreading vs. single-threading: 56.9x, using 4096 tasks.
+++++++++++++++++++++++++++++++++++++++
Thread pool performance test completed!
+++++++++++++++++++++++++++++++++++++++
These two CPUs have 40 physical cores in total, with each core providing two separate logical cores via hyperthreading, for a total of 80 threads. Without hyperthreading, we would expect a maximum theoretical speedup of 40x. With hyperthreading, one might naively expect to achieve up to an 80x speedup, but this is in fact impossible, as each pair of hyperthreaded logical cores share the same physical core's resources. However, generally we would expect at most an estimated 30% additional speedup from hyperthreading, which amounts to around 52x in this case. The speedup of 56.9x in our performance test exceeds this estimate.
If you are using the vcpkg C/C++ package manager, you can easily install BS::thread_pool with the following commands:
On Linux/macOS:
./vcpkg install bshoshany-thread-pool
On Windows:
.\vcpkg install bshoshany-thread-pool:x86-windows bshoshany-thread-pool:x64-windows
To update the package to the latest version, run:
vcpkg upgrade
If you are using the Conan C/C++ package manager, you can easily integrate BS::thread_pool into your project by adding the following lines to your conanfile.txt :
[requires]
bshoshany-thread-pool/4.1.0
To update the package to the latest version, simply change the version number. Please refer to this package's page on ConanCenter for more information.
If you are using the Meson build system, you can install BS::thread_pool from WrapDB. To do so, create a subprojects folder in your project (if it does not already exist) and run the following command:
meson wrap install bshoshany-thread-pool
Then, use dependency('bshoshany-thread-pool') in your meson.build file to include the package. To update the package to the latest version, run:
meson wrap update bshoshany-thread-pool
If you are using CMake, you can install BS::thread_pool with CPM. If CPM is already installed, simply add the following to your project's CMakeLists.txt :
CPMAddPackage(
    NAME BS_thread_pool
    GITHUB_REPOSITORY bshoshany/thread-pool
    VERSION 4.1.0)
add_library(BS_thread_pool INTERFACE)
target_include_directories(BS_thread_pool INTERFACE ${BS_thread_pool_SOURCE_DIR}/include)
This will automatically download the indicated version of the package from the GitHub repository and include it in your project.
It is also possible to use CPM without installing it first, by adding the following lines to CMakeLists.txt before CPMAddPackage :
set(CPM_DOWNLOAD_LOCATION "${CMAKE_BINARY_DIR}/cmake/CPM.cmake")
if(NOT (EXISTS ${CPM_DOWNLOAD_LOCATION}))
    message(STATUS "Downloading CPM.cmake")
    file(DOWNLOAD https://github.com/cpm-cmake/CPM.cmake/releases/latest/download/CPM.cmake ${CPM_DOWNLOAD_LOCATION})
endif()
include(${CPM_DOWNLOAD_LOCATION})
Here is an example of a complete CMakeLists.txt for a project named my_project, consisting of a single source file main.cpp which uses BS_thread_pool.hpp:
cmake_minimum_required(VERSION 3.19)
project(my_project LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CPM_DOWNLOAD_LOCATION "${CMAKE_BINARY_DIR}/cmake/CPM.cmake")
if(NOT (EXISTS ${CPM_DOWNLOAD_LOCATION}))
    message(STATUS "Downloading CPM.cmake")
    file(DOWNLOAD https://github.com/cpm-cmake/CPM.cmake/releases/latest/download/CPM.cmake ${CPM_DOWNLOAD_LOCATION})
endif()
include(${CPM_DOWNLOAD_LOCATION})
CPMAddPackage(
    NAME BS_thread_pool
    GITHUB_REPOSITORY bshoshany/thread-pool
    VERSION 4.1.0)
add_library(BS_thread_pool INTERFACE)
target_include_directories(BS_thread_pool INTERFACE ${BS_thread_pool_SOURCE_DIR}/include)
add_executable(my_project main.cpp)
target_link_libraries(my_project BS_thread_pool)
With both CMakeLists.txt and main.cpp in the same folder, type the following commands to build the project:
cmake -S . -B build
cmake --build build
This section provides a complete reference to the classes, member functions, objects, and macros available in this library, along with other important information. Member functions are given here with simplified prototypes (e.g. removing const) for ease of reading.
More information can be found in the provided Doxygen comments. Any modern IDE, such as Visual Studio Code, can use the Doxygen comments to provide automatic documentation for any class and member function in this library when hovering over code with the mouse or using auto-complete.
The main header file (BS_thread_pool.hpp)

The BS::thread_pool class

The class BS::thread_pool is the main thread pool class. It can be used to create a pool of threads and submit tasks to a queue. When a thread becomes available, it takes a task from the queue and executes it. The member functions that are available by default, when no macros are defined, are:

- thread_pool(): Construct a new thread pool. The number of threads will be the total number of hardware threads available, as reported by the implementation. This is usually determined by the number of cores in the CPU. If a core is hyperthreaded, it will count as two threads.
- thread_pool(BS::concurrency_t num_threads): Construct a new thread pool with the specified number of threads.
- thread_pool(std::function<void()>& init_task): Construct a new thread pool with the specified initialization function.
- thread_pool(BS::concurrency_t num_threads, std::function<void()>& init_task): Construct a new thread pool with the specified number of threads and initialization function.
- void reset(): Reset the pool with the total number of hardware threads available, as reported by the implementation. Waits for all currently running tasks to be completed, then destroys all threads in the pool and creates a new thread pool with the new number of threads. Any tasks that were waiting in the queue before the pool was reset will then be executed by the new threads. If the pool was paused before resetting it, the new pool will be paused as well.
- void reset(BS::concurrency_t num_threads): Reset the pool with a new number of threads.
- void reset(std::function<void()>& init_task): Reset the pool with the total number of hardware threads available, as reported by the implementation, and a new initialization function.
- void reset(BS::concurrency_t num_threads, std::function<void()>& init_task): Reset the pool with a new number of threads and a new initialization function.
- size_t get_tasks_queued(): Get the number of tasks currently waiting in the queue to be executed by the threads.
- size_t get_tasks_running(): Get the number of tasks currently being executed by the threads.
- size_t get_tasks_total(): Get the total number of unfinished tasks: either still waiting in the queue, or running in a thread. Note that get_tasks_total() == get_tasks_queued() + get_tasks_running().
- BS::concurrency_t get_thread_count(): Get the number of threads in the pool.
- std::vector<std::thread::id> get_thread_ids(): Get a vector containing the unique identifiers for each of the pool's threads, as obtained by std::thread::get_id().

The following member functions submit tasks to the queue without returning futures (T and F are template parameters):

- void detach_task(F&& task): Submit a function with no arguments and no return value into the task queue. To push a function with arguments, enclose it in a lambda expression. Does not return a future, so the user must use wait() or some other method to ensure that the task finishes executing, otherwise bad things will happen.
- void detach_blocks(T first_index, T index_after_last, F&& block, size_t num_blocks = 0): Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. The block function takes two arguments, the start and end of the block, so that it is called only once per block, but it is up to the user to make sure the block function correctly deals with all the indices in each block. Does not return a BS::multi_future, so the user must use wait() or some other method to ensure that the loop finishes executing, otherwise bad things will happen.
- void detach_loop(T first_index, T index_after_last, F&& loop, size_t num_blocks = 0): Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. The loop function takes one argument, the loop index, so that it is called many times per block. Does not return a BS::multi_future, so the user must use wait() or some other method to ensure that the loop finishes executing, otherwise bad things will happen.
- void detach_sequence(T first_index, T index_after_last, F&& sequence): Submit a sequence of tasks enumerated by indices to the queue. Does not return a BS::multi_future, so the user must use wait() or some other method to ensure that the sequence finishes executing, otherwise bad things will happen.

The following member functions submit tasks to the queue and return futures (T, F, and R are template parameters):

- std::future<R> submit_task(F&& task): Submit a function with no arguments into the task queue. To submit a function with arguments, enclose it in a lambda expression. If the function has a return value, get a future for the eventual returned value. If the function has no return value, get an std::future<void> which can be used to wait until the task finishes.
- BS::multi_future<R> submit_blocks(T first_index, T index_after_last, F&& block, size_t num_blocks = 0): Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. The block function takes two arguments, the start and end of the block, so that it is called only once per block, but it is up to the user to make sure the block function correctly deals with all the indices in each block. Returns a BS::multi_future that contains the futures for all of the blocks.
- BS::multi_future<void> submit_loop(T first_index, T index_after_last, F&& loop, size_t num_blocks = 0): Parallelize a loop by automatically splitting it into blocks and submitting each block separately to the queue. The loop function takes one argument, the loop index, so that it is called many times per block. It must have no return value. Returns a BS::multi_future that contains the futures for all of the blocks.
- BS::multi_future<R> submit_sequence(T first_index, T index_after_last, F&& sequence): Submit a sequence of tasks enumerated by indices to the queue. Returns a BS::multi_future that contains the futures for all of the tasks.
- void purge(): Purge all the tasks waiting in the queue. Tasks that are currently running will not be affected, but any tasks still waiting in the queue will be discarded, and will never be executed by the threads. Please note that there is no way to restore the purged tasks.

The following member functions are used to wait for tasks (R and P, C, and D are template parameters):

- void wait(): Wait for tasks to be completed. Normally, this function waits for all tasks, both those that are currently running in the threads and those that are still waiting in the queue. However, if the pool is paused, this function only waits for the currently running tasks (otherwise it would wait forever). Note: To wait for just one specific task, use submit_task() instead, and call the wait() member function of the generated future.
- bool wait_for(std::chrono::duration<R, P>& duration): Wait for tasks to be completed, but stop waiting after the specified duration has passed. Returns true if all tasks finished running, false if the duration expired but some tasks are still running.
- bool wait_until(std::chrono::time_point<C, D>& timeout_time): Wait for tasks to be completed, but stop waiting after the specified time point has been reached. Returns true if all tasks finished running, false if the time point was reached but some tasks are still running.

When a BS::thread_pool object goes out of scope, the destructor first waits for all tasks to complete, then destroys all threads. Note that if the pool is paused, then any tasks still in the queue will never be executed.
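As a brief illustration of the timed waiting functions listed above, here is a minimal sketch using wait_for() with an arbitrary timeout (the task duration and timeout are placeholder values):
#include "BS_thread_pool.hpp"       // BS::thread_pool
#include "BS_thread_pool_utils.hpp" // BS::synced_stream
#include <chrono>                   // std::chrono
#include <thread>                   // std::this_thread

BS::synced_stream sync_out;

int main()
{
    BS::thread_pool pool;
    pool.detach_task([] { std::this_thread::sleep_for(std::chrono::milliseconds(500)); });
    // Wait up to 100 ms for all tasks; returns false if some tasks are still unfinished.
    if (pool.wait_for(std::chrono::milliseconds(100)))
        sync_out.println("All tasks finished within the timeout.");
    else
        sync_out.println("Timeout expired; some tasks are still running.");
    pool.wait(); // Make sure the detached task finishes before the pool is destroyed.
}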
Optional features of the BS::thread_pool class

The thread pool has several optional features that must be explicitly enabled using macros.
- Task priority, enabled by defining the macro BS_THREAD_POOL_ENABLE_PRIORITY. The priority of a task or group of tasks may then be specified as an additional argument (at the end of the argument list) to detach_task(), submit_task(), detach_blocks(), submit_blocks(), detach_loop(), submit_loop(), detach_sequence(), and submit_sequence(). If the priority is not specified, the default value will be 0. The priority is a number of type BS::priority_t, which is a signed 16-bit integer, so it can have any value between -32,768 and 32,767. The tasks will be executed in priority order from highest to lowest. The namespace BS::pr contains some pre-defined priorities: BS::pr::highest, BS::pr::high, BS::pr::normal, BS::pr::low, and BS::pr::lowest.
- Pausing the pool, enabled by defining the macro BS_THREAD_POOL_ENABLE_PAUSE. Adds the following member functions:
  - void pause(): Pause the pool. The workers will temporarily stop retrieving new tasks out of the queue, although any tasks already being executed will keep running until they are finished.
  - void unpause(): Unpause the pool. The workers will resume retrieving new tasks out of the queue.
  - bool is_paused(): Check whether the pool is currently paused.
- Native thread handles, enabled by defining the macro BS_THREAD_POOL_ENABLE_NATIVE_HANDLES. Adds the following member function:
  - std::vector<std::thread::native_handle_type> get_native_handles(): Get a vector containing the underlying implementation-defined thread handles for each of the pool's threads.
- Wait deadlock checks, enabled by defining the macro BS_THREAD_POOL_ENABLE_WAIT_DEADLOCK_CHECK. wait(), wait_for(), and wait_until() will check whether the user tried to call them from within a thread of the same pool, which would result in a deadlock. If so, they will throw the exception BS::thread_pool::wait_deadlock instead of waiting.
- Disabling exception handling, enabled by defining the macro BS_THREAD_POOL_DISABLE_EXCEPTION_HANDLING. Disables exception handling in submit_task() if it is not needed, or if exceptions are explicitly disabled in the codebase. Note that this is independent of BS_THREAD_POOL_ENABLE_WAIT_DEADLOCK_CHECK: disabling exception handling removes the try-catch block from submit_task(), while enabling wait deadlock checks adds a throw expression to wait(), wait_for(), and wait_until(). If the feature-test macro __cpp_exceptions is undefined, BS_THREAD_POOL_DISABLE_EXCEPTION_HANDLING is automatically defined, and BS_THREAD_POOL_ENABLE_WAIT_DEADLOCK_CHECK is automatically undefined.

The BS::this_thread namespace

The namespace BS::this_thread provides functionality similar to std::this_thread. It contains the following function objects:
- BS::this_thread::get_index() can be used to get the index of the current thread. If this thread belongs to a BS::thread_pool object, it will have an index from 0 to BS::thread_pool::get_thread_count() - 1. Otherwise, for example if this thread is the main thread or an independent std::thread, std::nullopt will be returned.
- BS::this_thread::get_pool() can be used to get a pointer to the thread pool that owns the current thread. If this thread belongs to a BS::thread_pool object, a pointer to that object will be returned. Otherwise, std::nullopt will be returned.
- In both cases, an std::optional object will be returned, of type BS::this_thread::optional_index or BS::this_thread::optional_pool respectively. Unless you are 100% sure this thread is in a pool, first use std::optional::has_value() to check if it contains a value, and if so, use std::optional::value() to obtain that value.

The BS::multi_future<T> class

BS::multi_future<T> is a helper class used to facilitate waiting for and/or getting the results of multiple futures at once. It is defined as a specialization of std::vector<std::future<T>>. This means that all of the member functions that can be used on an std::vector can also be used on a BS::multi_future. For example, you may use a range-based for loop with a BS::multi_future, since it has iterators.
In addition to inherited member functions, BS::multi_future has the following specialized member functions ( R and P , C , and D are template parameters):
- [void or std::vector<T>] get(): Get the results from all the futures stored in this BS::multi_future, rethrowing any stored exceptions. If the futures return void, this function returns void as well. If the futures return a type T, this function returns a vector containing the results.
- size_t ready_count(): Check how many of the futures stored in this BS::multi_future are ready.
- bool valid(): Check if all the futures stored in this BS::multi_future are valid.
- void wait(): Wait for all the futures stored in this BS::multi_future.
- bool wait_for(std::chrono::duration<R, P>& duration): Wait for all the futures stored in this BS::multi_future, but stop waiting after the specified duration has passed. Returns true if all futures have been waited for before the duration expired, false otherwise.
- bool wait_until(std::chrono::time_point<C, D>& timeout_time): Wait for all the futures stored in this multi_future object, but stop waiting after the specified time point has been reached. Returns true if all futures have been waited for before the time point was reached, false otherwise.

The utilities header file (BS_thread_pool_utils.hpp)

The BS::signaller class

BS::signaller is a utility class which can be used to allow simple signalling between threads. This class is really just a convenient wrapper around std::promise, which contains both the promise and its future. It has the following member functions:
- signaller(): Construct a new signaller.
- void wait(): Wait until the signaller is ready.
- void ready(): Inform any waiting threads that the signaller is ready.

The BS::synced_stream class

BS::synced_stream is a utility class which can be used to synchronize printing to an output stream by different threads. It has the following member functions (T is a template parameter pack):
- synced_stream(std::ostream& stream = std::cout): Construct a new synced stream which prints to the given output stream.
- void print(T&&... items): Print any number of items into the output stream. Ensures that no other threads print to this stream simultaneously, as long as they all exclusively use the same synced_stream object to print.
- void println(T&&... items): Print any number of items into the output stream, followed by a newline character.

In addition, the class comes with two stream manipulators, which are meant to help the compiler figure out which template specializations to use with the class:
- BS::synced_stream::endl: An explicit cast of std::endl. Prints a newline character to the stream, and then flushes it. Should only be used if flushing is desired, otherwise a newline character should be used instead.
- BS::synced_stream::flush: An explicit cast of std::flush. Used to flush the stream.

The BS::timer class

BS::timer is a utility class which can be used to measure execution time for benchmarking purposes. It has the following member functions:
- timer(): Construct a new timer and immediately start measuring time.
- void start(): Start (or restart) measuring time. Note that the timer starts ticking as soon as the object is created, so this is only necessary if we want to restart the clock later.
- void stop(): Stop measuring time and store the elapsed time since the object was constructed or since start() was last called.
- std::chrono::milliseconds::rep current_ms(): Get the number of milliseconds that have elapsed since the object was constructed or since start() was last called, but keep the timer ticking.
- std::chrono::milliseconds::rep ms(): Get the number of milliseconds stored when stop() was last called.

This library is under continuous and active development. If you encounter any bugs, or if you would like to request any additional features, please feel free to open a new issue on GitHub and I will look into it as soon as I can.
Contributions are always welcome. However, I release my projects in cumulative updates after editing and testing them locally on my system, so my policy is not to accept any pull requests. If you open a pull request, and I decide to incorporate your suggestion into the project, I will first modify your code to comply with the project's coding conventions (formatting, syntax, naming, comments, programming practices, etc.), and perform some tests to ensure that the change doesn't break anything. I will then merge it into the next release of the project, possibly together with some other changes. The new release will also include a note in CHANGELOG.md with a link to your pull request, and modifications to the documentation in README.md as needed.
Many GitHub users have helped improve this project, directly or indirectly, via issues, pull requests, comments, and/or personal correspondence. Please see CHANGELOG.md for links to specific issues and pull requests that have been the most helpful. Thank you all for your contribution! :)
If you found this project useful, please consider starring it on GitHub! This allows me to see how many people are using my code, and motivates me to keep working to improve it.
Copyright (c) 2024 Barak Shoshany. Licensed under the MIT license.
If you use this C++ thread pool library in software of any kind, please provide a link to the GitHub repository in the source code and documentation.
If you use this library in published research, please cite it as follows: Barak Shoshany, "A C++17 Thread Pool for High-Performance Scientific Computing", SoftwareX 26 (2024) 101687, doi:10.1016/j.softx.2024.101687.
You can use the following BibTeX entry:
@article{Shoshany2024_ThreadPool,
    archiveprefix = {arXiv},
    author = {Barak Shoshany},
    doi = {10.1016/j.softx.2024.101687},
    eprint = {2105.00613},
    journal = {SoftwareX},
    pages = {101687},
    title = {{A C++17 Thread Pool for High-Performance Scientific Computing}},
    url = {https://www.sciencedirect.com/science/article/pii/S235271102400058X},
    volume = {26},
    year = {2024}
}
Please note that the papers on SoftwareX and arXiv are not up to date with the latest version of the library. These publications are only intended to facilitate discovery of this library by scientists, and to enable citing it in scientific research. Documentation for the latest version is provided only by the README.md file in the GitHub repository.
Beginner C++ programmers may be interested in my lecture notes for a course taught at McMaster University, which teach modern C and C++ from scratch, including some of the advanced techniques and programming practices used in developing this library.