Learning Objectives
- Explain why multithreading is necessary and what problems it solves
- Create threads using the TThread class and its Execute method
- Use Synchronize and Queue to safely update the GUI from worker threads
- Protect shared data with TCriticalSection to prevent race conditions
- Design thread-safe data structures
- Identify and fix common threading bugs: race conditions, deadlocks, and starvation
- Implement thread pools and work queues for efficient resource usage
- Add background database sync to PennyWise
In This Chapter
- 36.1 Why Multithreading?
- 36.2 TThread Class
- 36.3 Thread Synchronization
- 36.4 Critical Sections and Mutexes
- 36.5 Thread-Safe Data Structures
- 36.6 The Main Thread and GUI Updates
- 36.7 Thread Pools and Work Queues
- 36.8 Common Threading Bugs
- 36.9 Project Checkpoint: PennyWise Background Sync
- 36.10 Summary
Chapter 36: Multithreading and Concurrent Programming in Pascal
"Concurrency is not parallelism, although it enables parallelism." — Rob Pike
36.1 Why Multithreading?
Imagine Rosa is using PennyWise. She clicks "Sync" to upload her expenses to the server. The sync takes five seconds — the server is across the country, the data is large, the connection is slow. During those five seconds, the application freezes. The window goes gray. The cursor becomes an hourglass. Rosa cannot enter new expenses, cannot scroll through her history, cannot even close the application. She wonders if it has crashed.
This is the fundamental problem that multithreading solves.
A thread is an independent path of execution within a program. A single-threaded program has one thread that executes instructions sequentially — if that thread is busy waiting for a network response, everything else stops. A multithreaded program has multiple threads running concurrently. While one thread waits for the network, another thread keeps the user interface responsive. While one thread processes a large file, another thread updates a progress bar. While one thread handles an HTTP request, another thread handles a second request from a different client.
Why Not Just Write Faster Code?
Multithreading is not about making things faster (though it can). It is about making things responsive. Even if the sync must take five seconds (because the network is slow), multithreading ensures that those five seconds do not freeze the entire application. The user can continue working while the sync happens in the background.
There are also genuinely parallel workloads — tasks where multiple CPU cores can work on different parts simultaneously:
- Processing multiple files at once
- Computing results for different data subsets
- Handling multiple network connections
- Running background tasks (auto-save, spell-check, indexing)
Modern processors have 4, 8, 16, or more cores. A single-threaded program uses only one of them. A well-designed multithreaded program can use all of them.
Concurrency vs. Parallelism
These terms are often confused, but the distinction matters.
Concurrency means managing multiple tasks that can overlap in time. The tasks may not actually execute simultaneously — on a single-core CPU, concurrency is achieved by rapidly switching between tasks (time-slicing). The tasks appear to run simultaneously, but only one is actually executing at any moment.
Parallelism means actually executing multiple tasks simultaneously on different CPU cores. True parallelism requires multiple physical processing units.
A single-core machine with two threads has concurrency but not parallelism. A four-core machine with four threads has both. The distinction matters for performance: CPU-bound work benefits from parallelism (more cores = faster), while I/O-bound work benefits from concurrency (while one thread waits for the disk, another can use the CPU).
PennyWise's background sync is primarily I/O-bound (waiting for network responses), so it benefits from concurrency even on a single-core machine. A parallel report generator that crunches numbers across 100,000 expenses would benefit from parallelism on a multi-core machine.
Single-Threaded vs. Multi-Threaded: A Visual Comparison
Consider downloading three files, each taking 2 seconds:
Single-threaded (sequential):
File A: [========]
File B:           [========]
File C:                     [========]
Total: 6 seconds
Multi-threaded (concurrent):
File A: [========]
File B: [========]
File C: [========]
Total: 2 seconds
The sequential version takes three times as long because each download waits for the previous one to finish. The multi-threaded version starts all three simultaneously — while one thread waits for network data from server A, another thread is receiving data from server B. The total time is limited by the slowest download, not the sum of all downloads.
This is not a theoretical improvement. For network-bound, I/O-bound, and user-facing applications, threading can provide dramatic responsiveness improvements. Even if total CPU work is the same, the perceived performance is radically better because the user is never waiting for something that could be happening in the background.
The Difficulty Warning
We will be direct: concurrent programming is hard. It is arguably the most difficult topic in this entire book. The bugs are subtle, intermittent, and fiendishly difficult to reproduce. A race condition might crash your program once in a thousand runs — exactly the kind of bug that passes testing and appears in production. A deadlock might happen only when two threads acquire locks in a specific order that depends on timing.
Pascal's explicit memory model and strong type system help. Unlike languages with garbage collection, Pascal makes you think about who owns what, which forces you to think about thread safety. Unlike dynamically typed languages, Pascal catches type mismatches at compile time, reducing one entire category of concurrent bugs. But the fundamental difficulty remains: when two threads access the same data, the result depends on timing, and timing is non-deterministic.
💡 The Golden Rule of Threading: If two threads can access the same data, and at least one of them writes to it, you MUST synchronize access. No exceptions. No "it probably won't happen." If it can happen, it will — usually at 3 AM on a Saturday when you are not available to fix it.
36.2 TThread Class
Free Pascal provides threading through the TThread class in the Classes unit. To create a thread, you subclass TThread and override the Execute method.
Your First Thread
program FirstThread;
{$mode objfpc}{$H+}
uses
Classes, SysUtils;
type
TCounterThread = class(TThread)
private
FName: string;
FCount: Integer;
protected
procedure Execute; override;
public
constructor Create(const AName: string; ACount: Integer);
end;
constructor TCounterThread.Create(const AName: string; ACount: Integer);
begin
inherited Create(True); { Create suspended }
FName := AName;
FCount := ACount;
FreeOnTerminate := True;
end;
procedure TCounterThread.Execute;
var
I: Integer;
begin
for I := 1 to FCount do
begin
WriteLn(FName, ': ', I);
Sleep(100); { Simulate work }
end;
WriteLn(FName, ': Done!');
end;
var
Thread1, Thread2: TCounterThread;
begin
WriteLn('Main thread starting...');
Thread1 := TCounterThread.Create('Alpha', 5);
Thread2 := TCounterThread.Create('Beta', 5);
Thread1.Start;
Thread2.Start;
{ Crude wait: with FreeOnTerminate=True we cannot safely call WaitFor,
  so we simply sleep longer than the threads need }
Sleep(1000);
WriteLn('Main thread done.');
end.
Run this program several times. The output of Alpha and Beta will be interleaved — sometimes Alpha prints first, sometimes Beta, and the order changes between runs. This non-determinism is the fundamental nature of concurrent execution.
Key TThread Properties and Methods
| Member | Purpose |
|---|---|
| Execute | Override this — the thread's main procedure. Called when the thread starts. |
| Create(CreateSuspended) | Pass True to create the thread in a suspended state. |
| Start | Resumes a suspended thread. |
| Terminate | Sets the Terminated property to True — a polite request for the thread to stop. |
| Terminated | Check this property periodically in Execute to know when to stop. |
| WaitFor | Blocks the calling thread until this thread finishes. |
| FreeOnTerminate | If True, the thread object is automatically freed when Execute returns. |
| Synchronize(Method) | Executes a method in the main thread's context (for GUI updates). |
| Queue(Method) | Like Synchronize but non-blocking — queues the method for later execution. |
The Thread Lifecycle
Create(True) → Suspended
                  │  Start
                  ▼
               Running (Execute method runs;
                  │     checks Terminated periodically)
                  ▼
               Finished (Execute returns)
                  │
                  ├─ FreeOnTerminate = True  → auto-freed
                  └─ FreeOnTerminate = False → must call Free manually
The Terminated Pattern
A well-behaved thread checks Terminated periodically and exits gracefully:
procedure TMyWorker.Execute;
var
I: Integer;
begin
I := 0;
while not Terminated do
begin
{ Do one unit of work }
ProcessNextItem;
Inc(I);
{ Brief sleep to prevent CPU spinning }
if not Terminated then
Sleep(10);
end;
WriteLn('Worker processed ', I, ' items before terminating.');
end;
The main thread calls Thread.Terminate to request a stop, then Thread.WaitFor to wait for the thread to finish:
Thread.Terminate; { Sets Terminated := True }
Thread.WaitFor; { Blocks until Execute returns }
Thread.Free; { Clean up }
The OnTerminate Event
TThread provides an OnTerminate event that fires when Execute returns — and it fires in the main thread's context, making it safe for GUI updates:
type
TFileProcessor = class(TThread)
private
FFilename: string;
FLineCount: Integer;
FSuccess: Boolean;
protected
procedure Execute; override;
public
constructor Create(const AFilename: string);
property LineCount: Integer read FLineCount;
property Success: Boolean read FSuccess;
end;
constructor TFileProcessor.Create(const AFilename: string);
begin
inherited Create(True);
FFilename := AFilename;
FLineCount := 0;
FSuccess := False;
FreeOnTerminate := True;
end;
procedure TFileProcessor.Execute;
var
F: TextFile;
Line: string;
begin
try
AssignFile(F, FFilename);
Reset(F);
try
while (not Terminated) and (not System.EOF(F)) do
begin
ReadLn(F, Line);
Inc(FLineCount);
{ Process the line... }
end;
FSuccess := True;
finally
CloseFile(F);
end;
except
on E: Exception do
begin
FSuccess := False;
{ Cannot update GUI here — store the error for OnTerminate }
end;
end;
end;
{ Main thread. OnTerminate is a method event (TNotifyEvent), so in
  objfpc mode the handler must be a method of an object, not a
  standalone procedure: }
type
  TMainLogic = class
    procedure HandleProcessingDone(Sender: TObject);
  end;

procedure TMainLogic.HandleProcessingDone(Sender: TObject);
var
  Worker: TFileProcessor;
begin
  Worker := TFileProcessor(Sender);
  if Worker.Success then
    WriteLn(Format('Processed %d lines successfully.', [Worker.LineCount]))
  else
    WriteLn('Processing failed.');
end;

var
  Logic: TMainLogic;
  Processor: TFileProcessor;
begin
  Logic := TMainLogic.Create;
  Processor := TFileProcessor.Create('large_data.csv');
  Processor.OnTerminate := @Logic.HandleProcessingDone;
  Processor.Start;
  { Main thread continues immediately — OnTerminate fires when done }
end;
The OnTerminate event is particularly useful for fire-and-forget threads where FreeOnTerminate := True. You cannot call WaitFor on such threads (they might already be freed by the time you try), but OnTerminate gives you a safe callback when the work is done.
Creating Threads with Parameters
A common pattern is passing multiple parameters to a thread through its constructor:
type
TDownloadWorker = class(TThread)
private
FURL: string;
FOutputPath: string;
FTimeout: Integer;
FBytesDownloaded: Int64;
FError: string;
protected
procedure Execute; override;
public
constructor Create(const AURL, AOutputPath: string;
ATimeout: Integer = 30000);
property BytesDownloaded: Int64 read FBytesDownloaded;
property Error: string read FError;
end;
constructor TDownloadWorker.Create(const AURL, AOutputPath: string;
ATimeout: Integer);
begin
inherited Create(True); { Always create suspended }
FURL := AURL;
FOutputPath := AOutputPath;
FTimeout := ATimeout;
FBytesDownloaded := 0;
FError := '';
FreeOnTerminate := False;
end;
Always create threads suspended (pass True to the inherited constructor). This ensures you can set properties like FreeOnTerminate and OnTerminate before the thread starts executing. If you create it unsuspended, there is a race condition: the thread might finish and free itself before you set FreeOnTerminate.
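A sketch of driving such a worker from the calling thread. Because FreeOnTerminate is False, the caller owns the object and may safely WaitFor, read results, and Free it (the URL and output path are illustrative; the download logic inside Execute is assumed):

```pascal
var
  W: TDownloadWorker;
begin
  W := TDownloadWorker.Create('https://example.com/data.bin', 'data.bin');
  try
    W.Start;        { begin executing Execute }
    { ...the calling thread is free to do other work here... }
    W.WaitFor;      { block until the download finishes }
    if W.Error = '' then
      WriteLn('Downloaded ', W.BytesDownloaded, ' bytes')
    else
      WriteLn('Download failed: ', W.Error);
  finally
    W.Free;
  end;
end;
```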
⚠️ Never Kill a Thread Forcibly
There is no safe way to force-kill a thread in Pascal (or in most languages). Calling KillThread or TerminateThread (Windows API) can leave shared data in an inconsistent state, leak resources, and cause undefined behavior. Always use the cooperative Terminate/Terminated pattern.
36.3 Thread Synchronization
The moment two threads access the same variable, you are in dangerous territory. Consider this innocent-looking code:
var
Counter: Integer = 0;
procedure TThread1.Execute;
var I: Integer;
begin
for I := 1 to 1000000 do
Inc(Counter); { NOT thread-safe! }
end;
procedure TThread2.Execute;
var I: Integer;
begin
for I := 1 to 1000000 do
Inc(Counter); { NOT thread-safe! }
end;
If both threads increment Counter one million times, the final value should be 2,000,000. But it will not be. Run it ten times and you will get a different (wrong) answer each time — perhaps 1,347,892 or 1,856,441 or 1,999,103.
Why? Because Inc(Counter) is not atomic. At the machine level, it is three operations:
- Read the current value of Counter from memory into a register
- Add 1 to the register
- Write the new value from the register back to memory
If Thread 1 reads Counter=100, then Thread 2 reads Counter=100 (before Thread 1 writes), then Thread 1 writes 101, then Thread 2 writes 101 — one increment is lost. This is a race condition: the result depends on the relative timing of the two threads, which is unpredictable.
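For a single shared integer like this, the FPC RTL also offers atomic primitives. InterlockedIncrement (declared in the System unit) performs the whole read-modify-write cycle as one indivisible hardware operation, so no lock is needed for this one update. A sketch of the loop above rewritten with it:

```pascal
var
  Counter: LongInt = 0;  { InterlockedIncrement operates on a LongInt }

procedure TThread1.Execute;
var
  I: Integer;
begin
  for I := 1 to 1000000 do
    InterlockedIncrement(Counter);  { atomic increment: no lost updates }
end;
```

Atomics cover only simple single-variable updates; anything that touches more than one field at a time still needs a lock, which is the subject of Section 36.4.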
Synchronize and Queue
The simplest way to communicate between a worker thread and the main thread is Synchronize:
type
TDownloadThread = class(TThread)
private
FProgress: Integer;
FStatusMsg: string;
procedure UpdateUI;
protected
procedure Execute; override;
end;
procedure TDownloadThread.UpdateUI;
begin
{ This runs in the MAIN thread's context }
{ Safe to update GUI controls here }
WriteLn('Progress: ', FProgress, '% - ', FStatusMsg);
end;
procedure TDownloadThread.Execute;
var
I: Integer;
begin
for I := 1 to 100 do
begin
if Terminated then Exit;
Sleep(50); { Simulate download }
FProgress := I;
FStatusMsg := Format('Downloaded %d of 100 chunks', [I]);
Synchronize(@UpdateUI); { Blocks until main thread processes }
end;
end;
Synchronize pauses the worker thread, sends the method call to the main thread's message queue, and waits for the main thread to execute it. This is safe but slow — the worker thread is blocked during the GUI update.
Queue is the non-blocking alternative:
Queue(@UpdateUI); { Adds to queue and continues immediately }
The method will be executed by the main thread eventually (on its next message loop iteration), but the worker thread does not wait for it.
📊 Synchronize vs. Queue: A Detailed Comparison
| Feature | Synchronize | Queue |
|---|---|---|
| Blocking | Yes — worker waits | No — worker continues |
| Execution timing | Prompt (worker blocks until done) | Eventually (next message loop) |
| Return values | Can read results from GUI | Cannot — method executes later |
| Performance impact | Blocks worker thread | Minimal |
| Risk | Deadlock if main thread waits for worker | Queue overflow if called too frequently |
| Best for | Reading GUI state, modal dialogs | Progress updates, status messages |
| PennyWise usage | Sync confirmation dialog | Progress bar, status bar updates |
Use Synchronize when you need to read a result from the GUI (the worker must wait for the update to complete). Use Queue for fire-and-forget updates like progress bars and status messages (the worker continues immediately). In practice, Queue is preferred because it does not block the worker.
A subtle danger with Synchronize: if the main thread is waiting for the worker (via WaitFor), and the worker calls Synchronize (which waits for the main thread), you have a deadlock. The main thread is waiting for the worker to finish, and the worker is waiting for the main thread to process the synchronized call. Neither can proceed. The solution: never call WaitFor while the worker might call Synchronize. Use OnTerminate or Queue instead.
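The scenario can be sketched as two fragments running at the same time (the names are illustrative):

```pascal
{ Main thread: }
Worker.Terminate;
Worker.WaitFor;          { blocks until the worker's Execute returns }

{ Meanwhile, inside the worker's Execute: }
Synchronize(@UpdateUI);  { blocks until the main thread runs UpdateUI;
                           but the main thread is stuck in WaitFor,
                           so neither side can ever proceed }
```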
36.4 Critical Sections and Mutexes
For general-purpose thread synchronization (not just GUI updates), Pascal provides critical sections via the SyncObjs unit.
TCriticalSection
A critical section is a lock that ensures only one thread can execute a protected code block at a time:
uses
SyncObjs;
var
Counter: Integer = 0;
Lock: TCriticalSection;
procedure SafeIncrement;
begin
Lock.Enter;
try
Inc(Counter);
finally
Lock.Leave; { ALWAYS in a finally block }
end;
end;
When Thread 1 calls Lock.Enter, it acquires the lock. If Thread 2 then calls Lock.Enter, it blocks (waits) until Thread 1 calls Lock.Leave. This guarantees that only one thread executes the Inc(Counter) at a time — no more race conditions.
⚠️ Always Use try-finally
If the code between Enter and Leave raises an exception, and you do not have a finally block, the lock is never released. All other threads waiting for that lock will block forever. This is a deadlock caused by an exception. The try..finally pattern guarantees the lock is released regardless of what happens.
Protecting a Shared Data Structure
Here is a thread-safe counter class:
type
TThreadSafeCounter = class
private
FValue: Int64;
FLock: TCriticalSection;
public
constructor Create;
destructor Destroy; override;
procedure Increment;
procedure Decrement;
function GetValue: Int64;
end;
constructor TThreadSafeCounter.Create;
begin
inherited;
FValue := 0;
FLock := TCriticalSection.Create;
end;
destructor TThreadSafeCounter.Destroy;
begin
FLock.Free;
inherited;
end;
procedure TThreadSafeCounter.Increment;
begin
FLock.Enter;
try
Inc(FValue);
finally
FLock.Leave;
end;
end;
procedure TThreadSafeCounter.Decrement;
begin
FLock.Enter;
try
Dec(FValue);
finally
FLock.Leave;
end;
end;
function TThreadSafeCounter.GetValue: Int64;
begin
FLock.Enter;
try
Result := FValue;
finally
FLock.Leave;
end;
end;
Every method that reads or writes FValue acquires the lock first. This makes the class safe to use from any number of threads simultaneously.
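A sketch of the counter shared by several workers. TIncWorker is a hypothetical thread class that calls Increment a fixed number of times and has FreeOnTerminate = False, so the caller can WaitFor and Free it:

```pascal
var
  Counter: TThreadSafeCounter;
  Workers: array[0..3] of TThread;
  I: Integer;
begin
  Counter := TThreadSafeCounter.Create;
  try
    for I := 0 to High(Workers) do
      Workers[I] := TIncWorker.Create(Counter, 250000); { hypothetical worker }
    for I := 0 to High(Workers) do
    begin
      Workers[I].WaitFor;
      Workers[I].Free;
    end;
    WriteLn('Final count: ', Counter.GetValue); { always 4 * 250000 }
  finally
    Counter.Free;
  end;
end;
```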
Event Objects: TSimpleEvent
For more complex synchronization, Free Pascal provides TSimpleEvent (in SyncObjs), which lets one thread signal another that something has happened:
var
DataReady: TSimpleEvent;
{ Worker thread: }
procedure TWorker.Execute;
begin
{ Do work... }
PrepareData;
DataReady.SetEvent; { Signal that data is ready }
end;
{ Main thread: }
begin
DataReady := TSimpleEvent.Create;
try
StartWorker;
DataReady.WaitFor(INFINITE); { Block until signaled }
ProcessReadyData;
finally
DataReady.Free;
end;
end;
Events are more efficient than polling (repeatedly checking a boolean in a loop with Sleep), because the waiting thread truly sleeps until signaled — it uses no CPU time while waiting.
Producer-Consumer with TCriticalSection
The producer-consumer pattern is one of the most common threading patterns: one or more threads produce data, and one or more threads consume it. A shared buffer sits between them, protected by a lock:
type
TProducerConsumerQueue = class
private
FItems: array of string;
FHead, FTail, FCount: Integer;
FCapacity: Integer;
FLock: TCriticalSection;
public
constructor Create(Capacity: Integer);
destructor Destroy; override;
function Enqueue(const Item: string): Boolean;
function Dequeue(out Item: string): Boolean;
function Count: Integer;
function IsFull: Boolean;
function IsEmpty: Boolean;
end;
constructor TProducerConsumerQueue.Create(Capacity: Integer);
begin
inherited Create;
FCapacity := Capacity;
SetLength(FItems, Capacity);
FHead := 0;
FTail := 0;
FCount := 0;
FLock := TCriticalSection.Create;
end;
destructor TProducerConsumerQueue.Destroy;
begin
FLock.Free;
inherited;
end;
function TProducerConsumerQueue.Enqueue(const Item: string): Boolean;
begin
FLock.Enter;
try
if FCount >= FCapacity then
Exit(False); { Queue full }
FItems[FTail] := Item;
FTail := (FTail + 1) mod FCapacity;
Inc(FCount);
Result := True;
finally
FLock.Leave;
end;
end;
function TProducerConsumerQueue.Dequeue(out Item: string): Boolean;
begin
FLock.Enter;
try
if FCount = 0 then
Exit(False); { Queue empty }
Item := FItems[FHead];
FItems[FHead] := ''; { Release the string reference (Pascal strings are reference-counted) }
FHead := (FHead + 1) mod FCapacity;
Dec(FCount);
Result := True;
finally
FLock.Leave;
end;
end;
function TProducerConsumerQueue.Count: Integer;
begin
FLock.Enter;
try
Result := FCount;
finally
FLock.Leave;
end;
end;
function TProducerConsumerQueue.IsFull: Boolean;
begin
FLock.Enter;
try
Result := FCount >= FCapacity;
finally
FLock.Leave;
end;
end;
function TProducerConsumerQueue.IsEmpty: Boolean;
begin
FLock.Enter;
try
Result := FCount = 0;
finally
FLock.Leave;
end;
end;
This is a circular buffer (also called a ring buffer): FHead points to the next item to dequeue, FTail points to the next slot for enqueuing, and both wrap around using modular arithmetic. The buffer has a fixed capacity, which prevents unbounded memory growth if the producer is faster than the consumer.
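A quick worked trace of the wraparound, assuming Capacity = 3 (underscores mark empty slots):

```
Enqueue 'A':  FItems = [A, _, _]  FHead=0  FTail=1  FCount=1
Enqueue 'B':  FItems = [A, B, _]  FHead=0  FTail=2  FCount=2
Dequeue 'A':  FItems = [_, B, _]  FHead=1  FTail=2  FCount=1
Enqueue 'C':  FItems = [_, B, C]  FHead=1  FTail=0  FCount=2  (FTail wrapped: (2+1) mod 3 = 0)
Enqueue 'D':  FItems = [D, B, C]  FHead=1  FTail=1  FCount=3  (full: further Enqueue returns False)
```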
The producer thread adds items to the queue:
procedure TProducer.Execute;
var
ItemNum: Integer;
begin
ItemNum := 0;
while not Terminated do
begin
Inc(ItemNum);
if FQueue.Enqueue(Format('Item-%d', [ItemNum])) then
{ Item was added }
else
Sleep(10); { Queue full — wait briefly }
end;
end;
The consumer thread removes items from the queue:
procedure TConsumer.Execute;
var
Item: string;
begin
while not Terminated do
begin
if FQueue.Dequeue(Item) then
ProcessItem(Item)
else
Sleep(10); { Queue empty — wait briefly }
end;
end;
The Sleep(10) calls when the queue is full or empty are a simple form of backpressure: the thread yields the CPU briefly rather than spinning in a tight loop. A more sophisticated implementation would use a TSimpleEvent to signal when items are available, eliminating the polling entirely.
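The event-based variant can be sketched as follows. FItemAvailable is an assumed TSimpleEvent field shared by producer and consumer; the producer would call FItemAvailable.SetEvent after each successful Enqueue:

```pascal
procedure TConsumer.Execute;
var
  Item: string;
begin
  while not Terminated do
  begin
    { Sleep until a producer signals, waking every 100 ms so that
      Terminated is still checked even if no items ever arrive. }
    FItemAvailable.WaitFor(100);
    while FQueue.Dequeue(Item) do
      ProcessItem(Item);
    FItemAvailable.ResetEvent;  { queue drained: go back to sleeping }
  end;
end;
```

The 100 ms timeout also covers the small window in which an item could be enqueued between the drain loop and ResetEvent; a fully lossless design would fold the event into the queue itself.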
Race Condition Demonstration: Before and After
To make the danger concrete, here is a complete program that demonstrates a race condition and then fixes it:
program RaceConditionDemo;
{$mode objfpc}{$H+}
uses
Classes, SysUtils, SyncObjs;
var
Counter: Integer = 0;
Lock: TCriticalSection;
type
TUnsafeIncrementor = class(TThread)
protected
procedure Execute; override;
end;
TSafeIncrementor = class(TThread)
protected
procedure Execute; override;
end;
procedure TUnsafeIncrementor.Execute;
var I: Integer;
begin
for I := 1 to 1000000 do
Inc(Counter); { NOT thread-safe! }
end;
procedure TSafeIncrementor.Execute;
var I: Integer;
begin
for I := 1 to 1000000 do
begin
Lock.Enter;
try
Inc(Counter);
finally
Lock.Leave;
end;
end;
end;
var
T1, T2: TThread;
begin
{ Unsafe version }
Counter := 0;
T1 := TUnsafeIncrementor.Create(False);
T2 := TUnsafeIncrementor.Create(False);
T1.WaitFor; T2.WaitFor;
T1.Free; T2.Free;
WriteLn('Unsafe result (should be 2000000): ', Counter);
{ Safe version }
Counter := 0;
Lock := TCriticalSection.Create;
try
T1 := TSafeIncrementor.Create(False);
T2 := TSafeIncrementor.Create(False);
T1.WaitFor; T2.WaitFor;
T1.Free; T2.Free;
WriteLn('Safe result (should be 2000000): ', Counter);
finally
Lock.Free;
end;
end.
Run this several times. The unsafe version will produce a different (wrong) result each time. The safe version will always produce exactly 2,000,000. The performance difference is noticeable — the locked version is slower because of the locking overhead — but correctness always trumps performance. A fast program that produces wrong results is worse than a slow program that produces right results.
Lock Granularity
How much code should a lock protect? This is a trade-off:
Coarse-grained locking: One lock for the entire data structure. Simple but creates contention — threads block each other even when accessing different parts of the structure.
Fine-grained locking: Separate locks for different parts. Less contention but more complex — you must ensure you do not create deadlocks when acquiring multiple locks.
For most applications, start with coarse-grained locking. If profiling reveals that lock contention is a bottleneck, then consider finer granularity.
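When fine-grained locking forces a thread to hold two locks at once, the standard defense is a fixed global acquisition order. A sketch with two hypothetical locks, LockA and LockB:

```pascal
{ Rule: every thread that needs both locks takes LockA first, then LockB.
  If two threads took them in opposite orders, each could hold one lock
  while waiting for the other: a deadlock. }
procedure MoveItemBetweenStructures;
begin
  LockA.Enter;
  try
    LockB.Enter;
    try
      { ...update both structures consistently... }
    finally
      LockB.Leave;
    end;
  finally
    LockA.Leave;
  end;
end;
```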
36.5 Thread-Safe Data Structures
A critical section protects a block of code. But in practice, you usually want to protect a data structure — a list, a queue, a dictionary, a counter. The most maintainable approach is to build thread safety directly into the data structure, so that every operation is automatically synchronized. Users of the data structure do not need to worry about locking — the locking is internal.
This approach has a name: encapsulated synchronization (or sometimes "monitor-style" objects, after C.A.R. Hoare's monitor concept). The lock lives inside the object, and every public method acquires the lock before accessing internal state. This is the same information-hiding principle from Chapter 33, applied to concurrency: hide the synchronization mechanism from the user, just as you hide implementation details behind a unit interface.
The alternative — external locking, where the caller must acquire a lock before calling methods — is error-prone. If the caller forgets to lock, the bug is a race condition that might not manifest for weeks. With encapsulated synchronization, forgetting to lock is impossible because the locking happens automatically.
Let us create a thread-safe list that multiple threads can read from and write to simultaneously:
type
TThreadSafeList = class
private
FItems: array of string;
FCount: Integer;
FLock: TCriticalSection;
public
constructor Create;
destructor Destroy; override;
procedure Add(const Item: string);
function Get(Index: Integer): string;
function Count: Integer;
procedure Clear;
end;
constructor TThreadSafeList.Create;
begin
inherited;
FCount := 0;
SetLength(FItems, 16);
FLock := TCriticalSection.Create;
end;
destructor TThreadSafeList.Destroy;
begin
FLock.Free;
inherited;
end;
procedure TThreadSafeList.Add(const Item: string);
begin
FLock.Enter;
try
if FCount >= Length(FItems) then
SetLength(FItems, Length(FItems) * 2);
FItems[FCount] := Item;
Inc(FCount);
finally
FLock.Leave;
end;
end;
function TThreadSafeList.Get(Index: Integer): string;
begin
FLock.Enter;
try
if (Index < 0) or (Index >= FCount) then
raise ERangeError.CreateFmt('Index %d out of range (0..%d)', [Index, FCount - 1]);
Result := FItems[Index];
finally
FLock.Leave;
end;
end;
function TThreadSafeList.Count: Integer;
begin
FLock.Enter;
try
Result := FCount;
finally
FLock.Leave;
end;
end;
procedure TThreadSafeList.Clear;
begin
FLock.Enter;
try
FCount := 0;
finally
FLock.Leave;
end;
end;
💡 The Snapshot Pattern: Even with a thread-safe list, iterating over it can be problematic — the list might change between reading Count and reading individual items. A safer approach is the snapshot pattern: lock once, copy the data, unlock, then iterate over the copy. The copy is a private snapshot that no other thread can modify.
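The snapshot pattern could be added to TThreadSafeList as a hypothetical Snapshot method (not declared in the class above) that copies the items under the lock:

```pascal
type
  TStringArray = array of string;

function TThreadSafeList.Snapshot: TStringArray;
var
  I: Integer;
begin
  FLock.Enter;
  try
    SetLength(Result, FCount);
    for I := 0 to FCount - 1 do
      Result[I] := FItems[I];
  finally
    FLock.Leave;
  end;
end;

{ The caller iterates the private copy without holding the lock: }
var
  Items: TStringArray;
  S: string;
begin
  Items := List.Snapshot;
  for S in Items do
    WriteLn(S);
end;
```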
36.6 The Main Thread and GUI Updates
In Lazarus (and most GUI frameworks), there is one absolute rule:
Never access GUI controls from a worker thread.
GUI toolkits are not thread-safe. Updating a label, adding a row to a grid, or changing a progress bar from a worker thread can corrupt internal data structures, cause visual glitches, or crash the application. All GUI updates must happen in the main thread.
The pattern:
type
TSyncWorker = class(TThread)
private
FProgressText: string;
procedure DoUpdateProgress;
protected
procedure Execute; override;
end;
procedure TSyncWorker.DoUpdateProgress;
begin
{ This runs in the main thread — safe to update GUI }
{ MainForm.StatusBar.SimpleText := FProgressText; }
WriteLn(FProgressText); { Console equivalent }
end;
procedure TSyncWorker.Execute;
var
I: Integer;
begin
for I := 1 to 100 do
begin
if Terminated then Exit;
{ Do actual work }
Sleep(50);
{ Update GUI safely }
FProgressText := Format('Processing step %d of 100...', [I]);
Synchronize(@DoUpdateProgress);
end;
FProgressText := 'Processing complete.';
Synchronize(@DoUpdateProgress);
end;
For console applications (like our examples), concurrent WriteLn calls usually work for simple demos, though output from different threads can interleave. For GUI applications, the Synchronize/Queue pattern is mandatory.
The GUI Responsiveness Pattern
The most common use of threading in desktop applications is keeping the GUI responsive during long operations. The pattern is always the same:
- User clicks a button ("Sync", "Export", "Calculate")
- The button handler creates a worker thread and starts it
- The button handler returns immediately — the GUI stays responsive
- The worker thread does the long operation, periodically updating progress via
Queue - When done, the worker notifies the main thread via
OnTerminateor a finalQueuecall
Here is the complete pattern as used in a Lazarus form:
type
TExportWorker = class(TThread)
private
FExpenses: TExpenseArray;
FFilename: string;
FProgressPct: Integer;
FStatusText: string;
procedure DoUpdateProgress;
procedure DoExportComplete;
protected
procedure Execute; override;
public
constructor Create(const Expenses: TExpenseArray; const AFilename: string);
property ProgressPct: Integer read FProgressPct;
property StatusText: string read FStatusText;
end;
constructor TExportWorker.Create(const Expenses: TExpenseArray;
const AFilename: string);
begin
inherited Create(True);
FExpenses := Expenses;
FFilename := AFilename;
FreeOnTerminate := True;
end;
procedure TExportWorker.DoUpdateProgress;
begin
{ Runs in main thread — safe to update GUI }
WriteLn(Format('[%d%%] %s', [FProgressPct, FStatusText]));
{ In a real Lazarus app: MainForm.ProgressBar.Position := FProgressPct; }
{ MainForm.StatusBar.SimpleText := FStatusText; }
end;
procedure TExportWorker.DoExportComplete;
begin
WriteLn('Export complete: ', FFilename);
{ MainForm.ShowMessage('Export complete!'); }
end;
procedure TExportWorker.Execute;
var
I: Integer;
begin
FStatusText := 'Starting export...';
FProgressPct := 0;
Queue(@DoUpdateProgress);
for I := 0 to High(FExpenses) do
begin
if Terminated then
begin
FStatusText := 'Export cancelled.';
Queue(@DoUpdateProgress);
Exit;
end;
{ Process one expense }
{ ... (write to file) ... }
Sleep(5); { Simulate work }
{ Update progress every 10% }
if ((I + 1) * 100 div Length(FExpenses)) > FProgressPct + 10 then
begin
FProgressPct := (I + 1) * 100 div Length(FExpenses);
FStatusText := Format('Exporting: %d of %d', [I + 1, Length(FExpenses)]);
Queue(@DoUpdateProgress); { Non-blocking update }
end;
end;
FProgressPct := 100;
Queue(@DoExportComplete);
end;
The key insight is that Queue is non-blocking. The worker thread fires off a GUI update request and immediately continues working. The main thread processes the update on its next message loop iteration. This keeps both the worker and the GUI responsive.
Cancellation Pattern
Notice the Terminated check inside the loop. The main thread can cancel the export by calling Worker.Terminate. The worker checks this flag on every iteration and exits cleanly if cancellation is requested. This is cooperative cancellation — the worker is not killed; it chooses to stop.
To support a "Cancel" button in the GUI:
procedure TMainForm.CancelButtonClick(Sender: TObject);
begin
if FExportWorker <> nil then
FExportWorker.Terminate;
end;
The worker thread will notice Terminated = True on its next loop iteration, clean up, and exit. The OnTerminate event (or a final Queue call) notifies the main thread that cancellation is complete.
36.7 Thread Pools and Work Queues
Creating a new thread for every task is expensive. Each thread requires stack space (typically 1 MB on 64-bit systems), kernel resources (a thread handle, scheduling data structures), and initialization time (the OS must set up the thread's execution context). For a long-running task — downloading a large file, computing a report — this overhead is negligible compared to the work. But for short tasks — handling an HTTP request (100ms), processing a log entry (1ms), checking a file (0.1ms) — the overhead can exceed the work itself.
Imagine MicroServe receiving 100 HTTP requests per second. If it creates a new thread for each request, it is creating 100 threads per second — 100 MB of stack space, 100 kernel handles, 100 setup/teardown cycles. This is wasteful and eventually crashes the system (every OS has a maximum thread count).
A thread pool is the solution. It creates a fixed set of worker threads at startup and reuses them for many tasks. Instead of creating a thread per task, you submit tasks to a shared queue, and the workers pick them up. When a worker finishes one task, it picks up the next. The threads are created once and destroyed when the pool shuts down.
This pattern is fundamental to server software. Apache, Nginx, database connection pools, Java's ExecutorService, Go's goroutine scheduler, and Python's ThreadPoolExecutor all use variations of this pattern. Instead of creating a thread per task, you submit tasks to the pool and the workers pick them up:
type
TWorkItem = record
Description: string;
Data: Integer;
end;
TWorkQueue = class
private
FItems: array of TWorkItem;
FHead, FTail, FCount: Integer;
FCapacity: Integer;
FLock: TCriticalSection;
public
constructor Create(Capacity: Integer);
destructor Destroy; override;
procedure Enqueue(const Item: TWorkItem);
function Dequeue(out Item: TWorkItem): Boolean;
function IsEmpty: Boolean;
end;
TWorkerThread = class(TThread)
private
FQueue: TWorkQueue;
FWorkerID: Integer;
protected
procedure Execute; override;
public
constructor Create(Queue: TWorkQueue; WorkerID: Integer);
end;
constructor TWorkerThread.Create(Queue: TWorkQueue; WorkerID: Integer);
begin
inherited Create(True); { Create suspended; Start only after the fields are set }
FQueue := Queue;
FWorkerID := WorkerID;
FreeOnTerminate := False;
Start;
end;
procedure TWorkerThread.Execute;
var
Item: TWorkItem;
begin
while not Terminated do
begin
if FQueue.Dequeue(Item) then
begin
{ Process the work item }
WriteLn(Format('Worker %d processing: %s (data=%d)',
[FWorkerID, Item.Description, Item.Data]));
Sleep(100 + Random(200)); { Simulate variable work }
end
else
Sleep(10); { Queue empty, brief wait before checking again }
end;
end;
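The declaration above leaves TWorkQueue's methods unimplemented. A minimal sketch, assuming the fields declared earlier (the fixed array serves as a ring buffer, and every method takes FLock):

```pascal
constructor TWorkQueue.Create(Capacity: Integer);
begin
  inherited Create;
  FCapacity := Capacity;
  SetLength(FItems, FCapacity);
  FHead := 0; FTail := 0; FCount := 0;
  FLock := TCriticalSection.Create;
end;

destructor TWorkQueue.Destroy;
begin
  FLock.Free;
  inherited;
end;

procedure TWorkQueue.Enqueue(const Item: TWorkItem);
begin
  FLock.Enter;
  try
    if FCount = FCapacity then
      raise Exception.Create('Work queue is full');
    FItems[FTail] := Item;
    FTail := (FTail + 1) mod FCapacity;  { wrap around: ring buffer }
    Inc(FCount);
  finally
    FLock.Leave;
  end;
end;

function TWorkQueue.Dequeue(out Item: TWorkItem): Boolean;
begin
  FLock.Enter;
  try
    Result := FCount > 0;
    if Result then
    begin
      Item := FItems[FHead];
      FHead := (FHead + 1) mod FCapacity;
      Dec(FCount);
    end;
  finally
    FLock.Leave;
  end;
end;

function TWorkQueue.IsEmpty: Boolean;
begin
  FLock.Enter;
  try
    Result := FCount = 0;
  finally
    FLock.Leave;
  end;
end;
```

Note that Dequeue returns False instead of blocking when the queue is empty; that is why the worker's loop sleeps briefly before retrying. A TEvent could replace the polling, at the cost of more code.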
The main thread creates the pool and submits work:
var
Queue: TWorkQueue;
Workers: array[0..3] of TWorkerThread;
Item: TWorkItem;
I: Integer;
begin
Queue := TWorkQueue.Create(100);
{ Create worker pool }
for I := 0 to 3 do
Workers[I] := TWorkerThread.Create(Queue, I);
{ Submit work items }
for I := 1 to 20 do
begin
Item.Description := Format('Task %d', [I]);
Item.Data := I * 10;
Queue.Enqueue(Item);
end;
{ Wait for all tasks to drain (crude: a production pool would track completion with a counter or event) }
Sleep(5000);
{ Shut down workers }
for I := 0 to 3 do
Workers[I].Terminate;
for I := 0 to 3 do
begin
Workers[I].WaitFor;
Workers[I].Free;
end;
Queue.Free;
end.
The thread pool pattern is used extensively in server software (MicroServe could use a pool to handle multiple HTTP requests concurrently), database connection pools, and any application that needs to process a stream of tasks efficiently.
MicroServe with a Thread Pool
Recall MicroServe from Chapter 35 — it was single-threaded, handling one request at a time. With a thread pool, it can handle multiple simultaneous requests:
type
THTTPWorkItem = record
Description: string;
Data: Integer;
ClientStream: TSocketStream;
end;
procedure THTTPWorker.Execute;
var
Item: THTTPWorkItem;
begin
while not Terminated do
begin
if FQueue.Dequeue(Item) then
begin
try
HandleClient(Item.ClientStream); { Process the HTTP request }
except
on E: Exception do
WriteLn(Format('Worker %d error: %s', [FWorkerID, E.Message]));
end;
end
else
Sleep(5); { Brief wait when no requests }
end;
end;
The main server loop accepts connections and submits them to the pool. Released versions of Free Pascal do not support anonymous procedures, so use a named method (TServerApp here stands for whatever object owns the server):
procedure TServerApp.HandleConnect(Sender: TObject; Data: TSocketStream);
var
Item: THTTPWorkItem;
begin
Item.ClientStream := Data;
WorkQueue.Enqueue(Item);
end;
{ At startup: }
Server.OnConnect := @HandleConnect;
Now MicroServe can handle four (or eight, or sixteen) simultaneous HTTP requests. The pool size should match the expected concurrency: for a local development server, 4 workers is plenty. For a production server, use one worker per CPU core for CPU-bound work, or more for I/O-bound work (like database queries or proxy requests).
36.8 Common Threading Bugs
Threading bugs are the hardest bugs in software engineering. They are the bugs that senior developers with twenty years of experience still make. They are the bugs that pass testing a thousand times and then crash in production on a Saturday night. They are the bugs that disappear when you add a WriteLn to debug them (because the WriteLn changes the timing). They are the bugs that make experienced programmers say "I should have used a single thread."
Understanding these bugs — their causes, their symptoms, and their prevention — is essential for anyone writing concurrent code. The goal is not just to fix them when they appear, but to write code that prevents them from occurring in the first place.
Race Conditions
We already saw a race condition in Section 36.3 — two threads incrementing a counter without locking. The result depends on which thread's read-modify-write completes first.
How to fix: Protect shared mutable data with a critical section or other synchronization primitive.
How to detect: Code review (look for shared variables accessed without locks), testing under load (race conditions are more likely to manifest under heavy concurrent access), and tools like ThreadSanitizer.
Deadlocks
A deadlock occurs when two threads are each waiting for the other to release a lock:
Thread 1: Lock(A), then tries to Lock(B) — but Thread 2 holds B
Thread 2: Lock(B), then tries to Lock(A) — but Thread 1 holds A
Both threads wait forever. The program hangs.
The classic illustration is the Dining Philosophers Problem. Five philosophers sit at a circular table. Between each pair of philosophers is a single fork. Each philosopher needs two forks to eat. If every philosopher picks up their left fork first, they all hold one fork and wait forever for the second — deadlock. The solution is to impose an ordering: always pick up the lower-numbered fork first. This eliminates the circular wait.
In code terms:
{ DEADLOCK-PRONE: each thread acquires locks in different order }
{ Thread 1: }
LockA.Enter;
LockB.Enter; { Waits if Thread 2 holds B }
{ Thread 2: }
LockB.Enter;
LockA.Enter; { Waits if Thread 1 holds A → DEADLOCK }
{ DEADLOCK-FREE: both threads acquire locks in the same order }
{ Thread 1: }
LockA.Enter;
LockB.Enter;
{ Thread 2: }
LockA.Enter; { Waits until Thread 1 releases A }
LockB.Enter; { Then acquires B }
How to fix: Always acquire locks in the same order. If every thread acquires A before B, the deadlock cannot occur. This is the lock ordering strategy. In PennyWise, if you have a database lock and a sync lock, always acquire the database lock first.
How to detect: If your program hangs (stops responding without crashing), it is likely a deadlock. Attach a debugger and examine what each thread is waiting for. On Linux, sending SIGQUIT to the process produces a thread dump. On Windows, attach Visual Studio or WinDbg and inspect the thread states.
Starvation
Starvation occurs when a thread is perpetually denied access to a resource because other threads keep taking it first. This is less common than race conditions and deadlocks but can happen with unfair scheduling.
How to fix: Use fair locking primitives where available (note that common defaults, including Windows critical sections and POSIX mutexes, make no fairness guarantee), limit how long any thread holds a lock, and ensure that high-priority work does not permanently block low-priority work.
Priority Inversion
Priority inversion is a subtle problem where a high-priority thread is blocked by a low-priority thread that holds a lock. Consider three threads: High (priority 3), Medium (priority 2), and Low (priority 1). Low acquires a lock. High tries to acquire the same lock and blocks. Now Medium (which does not need the lock) runs instead of Low, because Medium has higher priority. High is waiting for Low to finish, but Low cannot run because Medium is using the CPU. High is effectively blocked by Medium, which has lower priority — hence "inversion."
The solution is priority inheritance: when a high-priority thread blocks on a lock held by a low-priority thread, the low-priority thread temporarily inherits the higher priority. This ensures it finishes quickly and releases the lock. Modern operating systems implement priority inheritance for mutexes, but critical sections may not support it. For PennyWise's background sync, priority inversion is unlikely (there are only two threads), but it is important to understand for server software with many threads at different priorities.
InterlockedIncrement: Lock-Free Atomics
For simple operations like incrementing a counter, a full critical section is overkill. Free Pascal provides interlocked functions, declared in the System unit (so no uses clause is required), that perform atomic operations without locks:
var
Counter: Integer = 0;
procedure ThreadSafeIncrement;
begin
InterlockedIncrement(Counter); { Atomic — no lock needed }
end;
procedure ThreadSafeDecrement;
begin
InterlockedDecrement(Counter);
end;
function ThreadSafeRead: Integer;
begin
Result := InterlockedCompareExchange(Counter, 0, 0);
end;
Interlocked operations use CPU-level atomic instructions (like LOCK INC on x86) that guarantee atomicity without the overhead of a kernel-level lock. They are significantly faster than critical sections for simple operations but can only protect individual values — not multi-step operations on complex data structures.
Use interlocked operations for: simple counters, boolean flags, reference counts. Use critical sections for: anything more complex (reading and modifying multiple fields, iterating a collection, multi-step operations).
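For example, a shutdown flag fits the interlocked model well (a sketch; ShuttingDown, RequestShutdown, and IsShuttingDown are illustrative names):

```pascal
var
  ShuttingDown: LongInt = 0;  { 0 = running, 1 = shutting down }

procedure RequestShutdown;
begin
  InterlockedExchange(ShuttingDown, 1);  { atomic write }
end;

function IsShuttingDown: Boolean;
begin
  { Compare-exchange with equal arguments is an atomic read: it swaps 0 for 0
    (a no-op when the value is 0) and returns the current value either way }
  Result := InterlockedCompareExchange(ShuttingDown, 0, 0) <> 0;
end;
```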
The ABA Problem
Thread 1 reads a value A. Thread 2 changes it to B, then back to A. Thread 1 checks and sees A — "nothing changed!" — but the intermediate state B may have invalidated an assumption Thread 1 made.
How to fix: Use version counters or timestamps alongside the data value. Instead of checking "is the value the same?", check "is the version the same?"
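A lock-based sketch of the version-counter fix (TVersionedValue and TryUpdate are illustrative names, not library types):

```pascal
type
  TVersionedValue = record
    Value: Integer;
    Version: Cardinal;  { incremented on every write, never reused }
  end;

var
  Shared: TVersionedValue;
  Lock: TCriticalSection;

{ Update only if nobody has written since we observed ExpectedVersion.
  An A -> B -> A sequence bumps the version twice, so it cannot fool this check. }
function TryUpdate(ExpectedVersion: Cardinal; NewValue: Integer): Boolean;
begin
  Lock.Enter;
  try
    Result := Shared.Version = ExpectedVersion;
    if Result then
    begin
      Shared.Value := NewValue;
      Inc(Shared.Version);
    end;
  finally
    Lock.Leave;
  end;
end;
```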
Debugging Tips
- Add logging. Print thread IDs and timestamps before and after lock acquisitions. This creates a timeline you can analyze.
- Simplify first. Reproduce the bug with the minimum number of threads and the simplest possible shared data.
- Stress test. Run the program with more threads than cores, for longer than normal, with random delays. This makes timing-dependent bugs more likely to manifest.
- Review lock patterns. Every access to shared mutable data must be protected. No exceptions.
- Use try-finally. Every Lock.Enter must have a matching Lock.Leave in a finally block.
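The first tip might look like this in practice (a sketch; remember that the WriteLn calls themselves perturb timing, so a bug may behave differently with logging enabled):

```pascal
WriteLn(Format('[thread %d] %s waiting for lock',
  [PtrUInt(GetCurrentThreadId), FormatDateTime('hh:nn:ss.zzz', Now)]));
Lock.Enter;
try
  WriteLn(Format('[thread %d] %s acquired lock',
    [PtrUInt(GetCurrentThreadId), FormatDateTime('hh:nn:ss.zzz', Now)]));
  { ... critical work ... }
finally
  Lock.Leave;
  WriteLn(Format('[thread %d] %s released lock',
    [PtrUInt(GetCurrentThreadId), FormatDateTime('hh:nn:ss.zzz', Now)]));
end;
```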
⚠️ The Humility Principle: Even experienced concurrent programmers make threading mistakes. The difficulty is inherent — humans do not think naturally about interleaved execution. The defense is discipline: follow the patterns rigidly, review carefully, test extensively, and never assume that a race condition "probably won't happen."
36.9 Project Checkpoint: PennyWise Background Sync
Everything in this chapter has been building toward this moment. PennyWise gains a background sync thread that periodically pushes unsaved expenses to the server without freezing the GUI. This is not a toy example — it is the exact pattern used by Dropbox (background file sync), Slack (background message fetch), Gmail (background email check), and VS Code (background extension updates). The user works in the foreground; the system synchronizes in the background. If the network is down, changes queue up and sync when connectivity returns.
The background sync combines every concept from this chapter: thread creation (TThread), cooperative termination (Terminated), shared data protection (TCriticalSection), GUI-safe updates (Queue), and the work queue pattern (pending expenses waiting to be synced). It also connects directly to Chapter 35's REST client — the sync thread makes HTTP POST requests to push expenses to the server.
The Background Sync Thread
unit FinanceSyncThread;
{$mode objfpc}{$H+}
interface
uses
Classes, SysUtils, SyncObjs, FinanceCore;
type
TSyncStatus = (ssIdle, ssSyncing, ssError, ssOffline);
TBackgroundSync = class(TThread)
private
FServerURL: string;
FPendingExpenses: TExpenseArray;
FLock: TCriticalSection;
FStatus: TSyncStatus;
FStatusMessage: string;
FSyncInterval: Integer; { seconds }
FOnStatusChange: TNotifyEvent;
procedure DoStatusChange;
procedure SyncPending;
protected
procedure Execute; override;
public
constructor Create(const ServerURL: string; SyncInterval: Integer = 30);
destructor Destroy; override;
procedure QueueExpense(const E: TExpense);
function GetStatus: TSyncStatus;
function GetStatusMessage: string;
function GetPendingCount: Integer;
property OnStatusChange: TNotifyEvent read FOnStatusChange write FOnStatusChange;
end;
implementation
uses
fphttpclient, fpjson;
constructor TBackgroundSync.Create(const ServerURL: string; SyncInterval: Integer);
begin
inherited Create(True);
FServerURL := ServerURL;
FSyncInterval := SyncInterval;
FLock := TCriticalSection.Create;
FStatus := ssIdle;
FStatusMessage := 'Ready';
SetLength(FPendingExpenses, 0);
FreeOnTerminate := False;
end;
destructor TBackgroundSync.Destroy;
begin
FLock.Free;
inherited;
end;
procedure TBackgroundSync.QueueExpense(const E: TExpense);
begin
FLock.Enter;
try
SetLength(FPendingExpenses, Length(FPendingExpenses) + 1);
FPendingExpenses[High(FPendingExpenses)] := E;
finally
FLock.Leave;
end;
end;
function TBackgroundSync.GetStatus: TSyncStatus;
begin
FLock.Enter;
try
Result := FStatus;
finally
FLock.Leave;
end;
end;
function TBackgroundSync.GetStatusMessage: string;
begin
FLock.Enter;
try
Result := FStatusMessage;
finally
FLock.Leave;
end;
end;
function TBackgroundSync.GetPendingCount: Integer;
begin
FLock.Enter;
try
Result := Length(FPendingExpenses);
finally
FLock.Leave;
end;
end;
procedure TBackgroundSync.DoStatusChange;
begin
if Assigned(FOnStatusChange) then
FOnStatusChange(Self);
end;
procedure TBackgroundSync.SyncPending;
var
ToSync: TExpenseArray;
Client: TFPHTTPClient;
Body: TJSONObject;
RequestStream, ResponseStream: TStringStream;
I, SuccessCount: Integer;
begin
{ Take a snapshot of pending items }
FLock.Enter;
try
if Length(FPendingExpenses) = 0 then Exit;
ToSync := Copy(FPendingExpenses, 0, Length(FPendingExpenses));
SetLength(FPendingExpenses, 0); { Clear the queue }
FStatus := ssSyncing;
FStatusMessage := Format('Syncing %d expenses...', [Length(ToSync)]);
finally
FLock.Leave;
end;
if Assigned(FOnStatusChange) then
Queue(@DoStatusChange);
SuccessCount := 0;
Client := TFPHTTPClient.Create(nil);
try
Client.AddHeader('Content-Type', 'application/json');
Client.IOTimeout := 10000;
for I := 0 to High(ToSync) do
begin
if Terminated then
begin
QueueExpense(ToSync[I]); { Re-queue unsent items so they are not lost }
Continue;
end;
Body := TJSONObject.Create;
try
Body.Add('description', ToSync[I].Description);
Body.Add('amount', Double(ToSync[I].Amount));
Body.Add('category', CategoryToStr(ToSync[I].Category));
Body.Add('date', FormatDateTime('yyyy-mm-dd', ToSync[I].ExpenseDate));
RequestStream := TStringStream.Create(Body.AsJSON);
ResponseStream := TStringStream.Create('');
try
Client.RequestBody := RequestStream;
try
Client.Post(FServerURL + '/api/expenses', ResponseStream);
if Client.ResponseStatusCode in [200, 201] then
Inc(SuccessCount);
except
{ Re-queue failed items }
QueueExpense(ToSync[I]);
end;
finally
ResponseStream.Free;
RequestStream.Free;
end;
finally
Body.Free;
end;
end;
finally
Client.Free;
end;
FLock.Enter;
try
if SuccessCount = Length(ToSync) then
begin
FStatus := ssIdle;
FStatusMessage := Format('Synced %d expenses', [SuccessCount]);
end
else
begin
FStatus := ssError;
FStatusMessage := Format('Synced %d of %d (retrying %d)',
[SuccessCount, Length(ToSync), Length(ToSync) - SuccessCount]);
end;
finally
FLock.Leave;
end;
if Assigned(FOnStatusChange) then
Queue(@DoStatusChange);
end;
procedure TBackgroundSync.Execute;
var
WaitCount: Integer;
begin
while not Terminated do
begin
SyncPending;
{ Wait for the next sync interval, checking Terminated periodically }
WaitCount := 0;
while (not Terminated) and (WaitCount < FSyncInterval * 10) do
begin
Sleep(100);
Inc(WaitCount);
end;
end;
end;
end.
Thread Safety Analysis
Let us verify the thread safety of this design:
| Shared Data | Protected By | Threads That Access |
|---|---|---|
| FPendingExpenses | FLock | Main (QueueExpense), Worker (SyncPending) |
| FStatus | FLock | Main (GetStatus), Worker (SyncPending) |
| FStatusMessage | FLock | Main (GetStatusMessage), Worker (SyncPending) |
| FOnStatusChange | Read-only after setup | Main (sets it), Worker (reads it) |
| FServerURL | Read-only | Worker only |
All shared mutable data is protected by the critical section. The FOnStatusChange event is set once by the main thread before the worker starts, and only read (never written) by the worker afterward — this is safe without locking.
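A sketch of how the main form might use this unit (TMainForm, FSync, SyncStatusChanged, and SaveLocally are assumed names, not part of the FinanceSyncThread interface):

```pascal
procedure TMainForm.FormCreate(Sender: TObject);
begin
  FSync := TBackgroundSync.Create('http://localhost:8080', 30);
  FSync.OnStatusChange := @SyncStatusChanged;  { set before Start; read-only afterward }
  FSync.Start;  { the constructor created the thread suspended }
end;

procedure TMainForm.SyncStatusChanged(Sender: TObject);
begin
  { Queued by the worker, so this runs safely in the main thread }
  StatusBar.SimpleText := FSync.GetStatusMessage;
end;

procedure TMainForm.AddExpense(const E: TExpense);
begin
  SaveLocally(E);        { assumed local persistence from earlier chapters }
  FSync.QueueExpense(E); { hand it to the background thread }
end;

procedure TMainForm.FormDestroy(Sender: TObject);
begin
  FSync.Terminate;  { cooperative shutdown: sets Terminated }
  FSync.WaitFor;    { the wait loop checks Terminated every 100 ms }
  FSync.Free;
end;
```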
What Rosa Experienced
Rosa enters an expense: "Coffee at Blue Bottle, $5.50, Food." PennyWise saves it locally and queues it for sync. The background thread picks it up within thirty seconds and pushes it to the server. The status bar shows "Syncing 1 expense..." briefly, then returns to "Last sync: 14:30:05."
If the network is down, something different happens. The status shows "Sync error — will retry." The expense stays in the pending queue. The next sync cycle (30 seconds later) tries again. If the network is still down, it retries again. When connectivity returns, all pending expenses sync in the next cycle. Rosa never needs to think about it — the sync happens automatically, silently, in the background.
"How does it know when the network comes back?" Rosa asks.
"It doesn't," Tomás explains. "It just keeps trying every thirty seconds. When a try succeeds, it syncs everything that has accumulated. It is the simplest possible strategy, and it works for our use case. If we needed instant notification of network recovery, we would need to listen for network change events from the operating system — but that is a lot of complexity for very little benefit."
This illustrates an important design principle in concurrent programming: simple strategies that retry are often better than complex strategies that try to be clever. The thirty-second polling loop is easy to understand, easy to debug, and handles all failure modes gracefully. A more sophisticated approach (event-driven network monitoring, exponential backoff, circuit breakers) would be warranted for a high-frequency trading system, but for a personal finance app, polling is perfect.
Performance Considerations for Threading
Threading is not free. Every thread consumes memory (stack space, typically 1 MB per thread), OS resources (kernel thread structures), and CPU time (context switching between threads). For PennyWise's background sync, one additional thread is negligible. But for MicroServe handling hundreds of connections, or for a parallel file processor, the number of threads matters.
Rule of thumb for CPU-bound work: Use one thread per CPU core. More threads than cores just means the OS spends time switching between them. If you have 4 cores, use 4 worker threads.
Rule of thumb for I/O-bound work (network, disk): You can use more threads than cores, because threads spend most of their time waiting for I/O. While one thread waits for a network response, another thread can use the CPU. 50-100 threads for I/O-bound work is reasonable.
Rule of thumb for memory: Each thread's stack is 1 MB by default. 1,000 threads = 1 GB of stack memory alone. If you need more concurrency than that, use a thread pool with a work queue (Section 36.7) or asynchronous I/O.
36.10 Summary
This chapter introduced multithreading and concurrent programming in Pascal.
Why multithreading: To keep applications responsive (the GUI does not freeze during long operations), to utilize multiple CPU cores, and to handle multiple simultaneous tasks (network connections, file processing).
TThread: Create a subclass, override Execute, create suspended, call Start. Use the Terminated property for cooperative shutdown. Set FreeOnTerminate := True for fire-and-forget threads, or call WaitFor + Free for managed threads.
Synchronize and Queue: The only safe way to update GUI controls from a worker thread. Synchronize blocks the worker until the main thread executes the method. Queue is non-blocking — the method is executed later.
Critical sections: TCriticalSection ensures mutual exclusion — only one thread can be inside the protected block at a time. Always use try..finally to guarantee the lock is released. Every access to shared mutable data must be protected.
Thread-safe data structures encapsulate a lock within the data structure, ensuring that all operations are synchronized automatically. The snapshot pattern (lock, copy, unlock, iterate the copy) is useful for iteration.
Thread pools and work queues avoid the overhead of creating threads for every task. A fixed set of workers processes tasks from a shared queue. This pattern is used by servers, databases, and batch processing systems.
Common bugs: Race conditions (unprotected shared data), deadlocks (circular lock dependencies), starvation (unfair resource access), and the ABA problem (intermediate state changes). Prevention: protect all shared mutable data, always acquire locks in the same order, use try-finally for lock release, and test under heavy load.
PennyWise gained a TBackgroundSync thread that periodically pushes queued expenses to the server. The thread uses a critical section to protect the pending queue and status, Queue for non-blocking GUI updates, and cooperative termination via Terminated.
Concurrent programming is difficult, and Pascal does not pretend otherwise. Its explicit memory model, lack of garbage collection, and strong type system force you to think clearly about data ownership and thread safety. These are the right habits — and they transfer directly to C, C++, Rust, Go, and every other language where concurrency matters.