Chapter 36 Exercises: Multithreading and Concurrent Programming


Part A: Conceptual Understanding

A.1. Explain the difference between concurrency and parallelism. Can you have concurrency without parallelism? Can you have parallelism without concurrency?

Guidance Concurrency means managing multiple tasks that can make progress — they may or may not run at the same time. Parallelism means tasks literally execute simultaneously on different CPU cores. You can have concurrency without parallelism: a single-core CPU switches between threads (time-slicing). You cannot have parallelism without concurrency: parallel tasks are inherently concurrent. On modern multi-core machines, concurrent threads often achieve true parallelism.

A.2. What is a race condition? Give a specific example (different from the chapter) and explain how a critical section prevents it.

Guidance A race condition occurs when the program's correctness depends on the relative timing of threads accessing shared data. Example: two threads adding items to a shared dynamic array — Thread 1 reads Count=5, Thread 2 reads Count=5, Thread 1 writes to index 5 and sets Count=6, Thread 2 writes to index 5 (overwriting Thread 1's data) and sets Count=6. One item is lost. A critical section prevents this by ensuring only one thread executes the add-and-increment sequence at a time.
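The add-and-increment sequence described above can be sketched in Free Pascal. TSharedList, FLock, and the fixed capacity are illustrative, not from the chapter:

```pascal
type
  TSharedList = class
  private
    FItems: array[0..99] of Integer;
    FCount: Integer;
    FLock: TCriticalSection;   { from the SyncObjs unit }
  public
    constructor Create;
    destructor Destroy; override;
    procedure Add(Value: Integer);
  end;

constructor TSharedList.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
end;

destructor TSharedList.Destroy;
begin
  FLock.Free;
  inherited Destroy;
end;

procedure TSharedList.Add(Value: Integer);
begin
  FLock.Acquire;
  try
    FItems[FCount] := Value;   { read Count and write the item... }
    Inc(FCount);               { ...and increment, as one indivisible step }
  finally
    FLock.Release;
  end;
end;
```

Without the lock, two threads could both read FCount = 5 and both write index 5; with it, the second thread waits until the first has finished both steps.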

A.3. Why must GUI updates always happen in the main thread? What are the two mechanisms Pascal provides for this, and how do they differ?

Guidance GUI frameworks (Lazarus/LCL, Windows GDI, GTK) are not thread-safe — their internal data structures can be corrupted by concurrent access. Pascal provides Synchronize (blocks the worker thread until the main thread executes the method — safe but slow) and Queue (non-blocking — queues the method for later execution by the main thread). Use Synchronize when you need the result; use Queue for fire-and-forget updates.
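A minimal sketch of both mechanisms, assuming a worker thread and a label named StatusLabel on the main form (TMyWorker, FStatus, and UpdateLabel are illustrative names):

```pascal
type
  TMyWorker = class(TThread)
  private
    FStatus: string;
    procedure UpdateLabel;      { always runs in the main thread }
  protected
    procedure Execute; override;
  end;

procedure TMyWorker.UpdateLabel;
begin
  StatusLabel.Caption := FStatus;  { safe: executed by the main thread }
end;

procedure TMyWorker.Execute;
begin
  FStatus := 'Working...';
  Synchronize(@UpdateLabel);  { blocks here until the main thread has run it }
  { ... do the actual work ... }
  FStatus := 'Done';
  Queue(@UpdateLabel);        { fire-and-forget: returns immediately }
end;
```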

A.4. Describe a deadlock scenario involving two critical sections. Then explain the "lock ordering" strategy for preventing it.

Guidance Scenario: Thread 1 acquires Lock A, then tries to acquire Lock B. Thread 2 acquires Lock B, then tries to acquire Lock A. Both threads are blocked waiting for each other — deadlock. Lock ordering solution: define a global order (A before B). All threads must acquire A before B, never B before A. Since Thread 2 cannot hold B without first holding A, and Thread 1 already holds A, Thread 2 waits for A first. No circular wait occurs.
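The lock-ordering rule can be sketched as follows; LockA and LockB are illustrative globals, and every thread in the program would use this same acquisition order:

```pascal
var
  LockA, LockB: TCriticalSection;  { global order: A before B, always }

procedure TouchBothResources;
begin
  LockA.Acquire;            { always A first... }
  try
    LockB.Acquire;          { ...then B - never the reverse }
    try
      { work with data protected by both locks }
    finally
      LockB.Release;
    end;
  finally
    LockA.Release;
  end;
end;
```

Since no thread ever holds B while waiting for A, the circular wait that defines deadlock cannot form.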

A.5. What does FreeOnTerminate := True do? When is it appropriate, and when is it dangerous?

Guidance When FreeOnTerminate is True, the thread object is automatically freed when Execute returns. Appropriate for fire-and-forget threads where you don't need the result and don't need to wait for completion. Dangerous when you need to call WaitFor (the object might be freed before you call it), when you need to read results from the thread after it finishes, or when another object holds a reference to the thread. If FreeOnTerminate is True, do not store or use the thread reference after calling Start.
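The two ownership patterns can be sketched side by side; TMyWorker and ResultValue are illustrative names:

```pascal
{ Fire-and-forget: never touch the reference after Start. }
Worker := TMyWorker.Create(True);     { create suspended }
Worker.FreeOnTerminate := True;
Worker.Start;
Worker := nil;                        { the thread frees itself }

{ Owned thread: wait, read the result, then free it yourself. }
Worker := TMyWorker.Create(True);
Worker.FreeOnTerminate := False;      { the default }
Worker.Start;
Worker.WaitFor;
WriteLn(Worker.ResultValue);          { illustrative result field }
Worker.Free;
```

Mixing the two patterns - setting FreeOnTerminate := True and then calling WaitFor - is the classic use-after-free bug described above.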

Part B: Applied Analysis

B.1. You have a program that processes 1,000 image files (resizing each one). On a 4-core machine, design a threading strategy. How many threads would you create? How would you distribute the work? What shared data needs protection?

Guidance Create 4 worker threads (one per core — more threads than cores adds context-switching overhead without benefit for CPU-bound work). Distribution options: (1) Static partitioning — each thread gets 250 files. (2) Work queue — all 1,000 files go into a shared queue, each thread pulls the next file when ready (better load balancing if files vary in size). Shared data needing protection: the work queue (thread-safe enqueue/dequeue), a progress counter (atomic increment or locked), and the output directory (file writes are independent but directory creation needs synchronization).
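The work-queue option might look like the loop below in each worker; FileQueue.TryDequeue, ResizeImage, and DoneCount are assumed names, with the queue providing a thread-safe dequeue:

```pascal
var
  DoneCount: LongInt = 0;   { shared progress counter }

procedure TResizeWorker.Execute;
var
  FileName: string;
begin
  while FileQueue.TryDequeue(FileName) do
  begin
    ResizeImage(FileName);            { CPU-bound, no shared state }
    InterlockedIncrement(DoneCount);  { atomic progress update }
  end;
  { queue empty: the worker simply finishes }
end;
```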

B.2. Review the following code and identify all threading bugs:

var
  Data: TStringList;  { shared between threads }
  Processing: Boolean;

procedure TWorker.Execute;
begin
  Processing := True;
  while not Terminated do
  begin
    if Data.Count > 0 then
    begin
      ProcessItem(Data[0]);
      Data.Delete(0);
    end;
  end;
  Processing := False;
end;
Guidance Bugs: (1) Data.Count, Data[0], and Data.Delete(0) are not synchronized — another thread could modify Data between checking Count and accessing [0]. (2) Processing is a shared boolean with no synchronization — a classic race condition. (3) No protection against two worker threads processing the same item simultaneously. (4) TStringList is not thread-safe — concurrent access can corrupt its internal state. (5) No try-finally or exception handling. Fix: wrap all Data access in a critical section, use a proper thread-safe queue, and protect the Processing flag.
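One possible fix for the Data bugs, assuming a global DataLock: TCriticalSection created alongside Data. The count check, the read, and the delete all happen under one lock, and the (potentially slow) processing happens outside it:

```pascal
procedure TWorker.Execute;
var
  Item: string;
  HasItem: Boolean;
begin
  while not Terminated do
  begin
    HasItem := False;
    DataLock.Acquire;
    try
      if Data.Count > 0 then
      begin
        Item := Data[0];
        Data.Delete(0);        { remove while still holding the lock }
        HasItem := True;
      end;
    finally
      DataLock.Release;
    end;
    if HasItem then
      ProcessItem(Item)        { process outside the lock }
    else
      Sleep(10);               { avoid a busy-wait when the list is empty }
  end;
end;
```

The Processing flag would still need its own treatment (a lock or an interlocked exchange), but this closes the check-then-act window on Data.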

Part C: Code Exercises

C.1. Write a program that creates two threads: one prints "Tick" every 500ms, the other prints "Tock" every 700ms. Both run for 5 seconds, then the main thread terminates them and prints the total tick and tock counts. Use a thread-safe counter.

Guidance Create a TThreadSafeCounter (or use InterlockedIncrement). Each thread has a loop: print its word, increment its counter, Sleep for the interval, check Terminated. The main thread starts both, sleeps for 5 seconds, calls Terminate on both, WaitFor both, reads and displays the counter values, then frees everything.
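One way to sketch the ticker thread, using InterlockedIncrement for the counter; TTicker and its fields are illustrative. Note that WriteLn output from two threads may interleave on the console:

```pascal
type
  TTicker = class(TThread)
  protected
    procedure Execute; override;
  public
    Count: LongInt;        { read by the main thread after WaitFor }
    Msg: string;           { 'Tick' or 'Tock' }
    IntervalMS: Integer;   { 500 or 700 }
  end;

procedure TTicker.Execute;
begin
  while not Terminated do
  begin
    WriteLn(Msg);
    InterlockedIncrement(Count);
    Sleep(IntervalMS);
  end;
end;
```

The main thread would then create both tickers suspended, set Msg and IntervalMS, Start both, Sleep(5000), Terminate and WaitFor both, print each Count, and Free both.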

C.2. Implement a thread-safe producer-consumer queue. One thread produces items (integers 1 to 100) and adds them to the queue. Two consumer threads remove items and print them. The program should terminate when all items have been consumed.

Guidance Create a TThreadSafeQueue with Enqueue and Dequeue (returns False if empty). The producer enqueues 100 items with brief delays. Each consumer loops: dequeue an item, process it (print), repeat until the queue is empty AND the producer is done. Use a shared "producerDone" flag (protected by a lock or atomic variable) so consumers know when to stop. Count total items consumed across both consumers to verify all 100 were processed.
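One minimal sketch of the queue, built on TThreadList from the Classes unit and storing integers as pointers; TSafeQueue and its method names are illustrative:

```pascal
type
  TSafeQueue = class
  private
    FList: TThreadList;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Enqueue(Value: Integer);
    function TryDequeue(out Value: Integer): Boolean;
  end;

constructor TSafeQueue.Create;
begin
  inherited Create;
  FList := TThreadList.Create;
end;

destructor TSafeQueue.Destroy;
begin
  FList.Free;
  inherited Destroy;
end;

procedure TSafeQueue.Enqueue(Value: Integer);
begin
  FList.Add(Pointer(PtrInt(Value)));   { TThreadList locks internally }
end;

function TSafeQueue.TryDequeue(out Value: Integer): Boolean;
var
  L: TList;
begin
  Result := False;
  L := FList.LockList;                 { check and remove under one lock }
  try
    if L.Count > 0 then
    begin
      Value := PtrInt(L[0]);
      L.Delete(0);
      Result := True;
    end;
  finally
    FList.UnlockList;
  end;
end;
```

Each consumer would then loop: if TryDequeue succeeds, print the item; otherwise, stop if the producerDone flag is set, or Sleep briefly and retry.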

C.3. Create a parallel file word counter. The program takes a list of text files, creates one thread per file (up to a maximum of 4 threads), and each thread counts the words in its file. The main thread collects results and prints a summary. Protect the results collection with a critical section.

Guidance Define a TWordCountThread that takes a filename, counts words (split by whitespace), and stores the count. Use a thread-safe results list or a critical section around result storage. The main thread creates threads (up to 4 at a time), waits for them, then reads results. If there are more than 4 files, wait for a batch to finish before starting the next.
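A sketch of the per-file worker, using WordCount from the StrUtils unit to split on whitespace; the class layout is illustrative. Because each thread writes only its own Words field and the main thread reads it only after WaitFor, no lock is needed for this particular design - the critical-section variant would apply if threads pushed results into one shared list instead:

```pascal
type
  TWordCountThread = class(TThread)
  protected
    procedure Execute; override;
  public
    FileName: string;
    Words: Integer;
  end;

procedure TWordCountThread.Execute;
var
  Lines: TStringList;
  i: Integer;
begin
  Lines := TStringList.Create;
  try
    Lines.LoadFromFile(FileName);
    Words := 0;
    for i := 0 to Lines.Count - 1 do
      Inc(Words, WordCount(Lines[i], [' ', #9]));  { split on space/tab }
  finally
    Lines.Free;
  end;
end;
```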

Part D: Challenge Problems

D.1. Add multi-threading to MicroServe. The current server handles one request at a time. Modify it to create a new thread for each incoming connection, allowing multiple simultaneous requests. Test by sending 10 requests simultaneously (using 10 instances of the HTTP client). Measure the improvement in throughput.

Guidance In the OnConnect handler, instead of handling the request directly, create a TRequestHandler thread and pass it the TSocketStream. The thread handles the request and closes the connection. Protect shared data (the expenses array) with a critical section. For testing, create a client program that starts 10 threads, each making an HTTP request, and measures total time. Compare with the single-threaded version.
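A sketch of the per-connection handler thread, assuming the server hands over an ssockets TSocketStream; TRequestHandler and HandleRequest are illustrative names, and the actual MicroServe wiring may differ:

```pascal
type
  TRequestHandler = class(TThread)
  private
    FClient: TSocketStream;
  protected
    procedure Execute; override;
  public
    constructor Create(AClient: TSocketStream);
  end;

constructor TRequestHandler.Create(AClient: TSocketStream);
begin
  inherited Create(True);       { create suspended }
  FClient := AClient;           { the thread now owns the stream }
  FreeOnTerminate := True;      { fire-and-forget: one thread per request }
end;

procedure TRequestHandler.Execute;
begin
  try
    HandleRequest(FClient);     { read the request, write the response }
  finally
    FClient.Free;               { closes the connection }
  end;
end;

{ In the OnConnect handler: TRequestHandler.Create(ClientStream).Start; }
```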

D.2. Implement a thread pool with a configurable number of workers. The pool should accept work items (procedure references or method pointers), queue them, and distribute them to available workers. Include proper shutdown: stop accepting new work, wait for all queued work to complete, then terminate workers.

Guidance Define TWorkProc = procedure of object (or use a record with a method pointer). TThreadPool has a TWorkQueue and an array of TWorkerThread. Submit() adds to the queue. Shutdown() sets a "shutting down" flag, waits for the queue to empty, terminates all workers, and waits for them to finish. Each worker loops: dequeue (blocking or polling), execute, repeat until terminated. This is a simplified version of Java's ExecutorService or Python's ThreadPoolExecutor.
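A skeleton of the pool's public surface and the worker loop; all names are illustrative, and the polling Sleep could be replaced by an event object for a production version:

```pascal
type
  TWorkProc = procedure of object;

  TThreadPool = class
  private
    FQueue: TThreadList;              { of queued TWorkProc wrappers }
    FWorkers: array of TWorkerThread;
    FShuttingDown: Boolean;
  public
    constructor Create(WorkerCount: Integer);
    procedure Submit(Work: TWorkProc);  { rejects work if FShuttingDown }
    procedure Shutdown;                 { drain the queue, then stop workers }
  end;

{ Each worker's loop: }
procedure TWorkerThread.Execute;
var
  Work: TWorkProc;
begin
  while not Terminated do
    if FPool.TryDequeue(Work) then
      Work()                    { run the work item }
    else
      Sleep(5);                 { simple polling between items }
end;
```

Shutdown would set FShuttingDown, spin until the queue is empty, call Terminate on every worker, then WaitFor and Free each one.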