In the previous section (the Web Client), we created a new thread for every download. This works great until those threads need to update a shared variable, like a counter tracking how many downloads are finished.

This section introduces the most fundamental concept in concurrent programming: ensuring that only one thread accesses a critical piece of data at a time.

Core Objective: Solving the Race Condition

The book highlights a specific danger in the Web Client code. We have global variables (like nconn for active connections) that multiple threads try to update simultaneously.

1. The “Race Condition” Explained

Stevens provides a classic explanation of why simply writing nconn-- in C is dangerous. It looks like one instruction, but at the machine level (assembly), it is actually three separate steps:

  1. Load the value of nconn from memory into a register.
  2. Subtract 1 from the register.
  3. Store the new value from the register back into memory.

The Disaster Scenario: Imagine nconn is 2.

  1. Thread A loads 2 into its register.
  2. Interrupt! The OS pauses Thread A and switches to Thread B.
  3. Thread B loads 2 (because Thread A hasn’t saved the change yet).
  4. Thread B subtracts 1 and stores 1.
  5. Interrupt! The OS switches back to Thread A.
  6. Thread A (resuming) subtracts 1 from its register (which still holds 2) and stores 1.

Result: Two threads each finished a download, but the counter went down by only 1. We “lost” a decrement. This is a Race Condition.

2. The Solution: Mutexes

A Mutex (Mutual Exclusion) is like a lock on a door.

  • Lock: Before touching the shared counter, a thread must “lock” the mutex. If it’s already locked by another thread, the caller waits (blocks) until the holder unlocks it.
  • Unlock: After updating the counter, the thread “unlocks” the mutex, letting the next waiting thread in.

The Implementation

The book introduces the Pthreads API for this:

A. Declaration & Initialization

You can initialize a mutex statically (if it’s a global variable):

pthread_mutex_t counter_mutex = PTHREAD_MUTEX_INITIALIZER;

B. Locking and Unlocking

To safely change the shared variables, we wrap the code in lock/unlock calls (the capitalized Pthread_* names are the book’s error-checking wrappers around the standard pthread_* functions):

/* This is the Critical Region */
Pthread_mutex_lock(&counter_mutex);
 
nconn--;        // Safe to modify now
nlefttoread--;  // Safe to modify now
 
Pthread_mutex_unlock(&counter_mutex);

3. Key Rules for Mutexes

The text emphasizes a few best practices:

  1. Granularity: Don’t lock too much. If you lock the entire function, you destroy the benefits of parallelism (only one thread runs at a time). Only lock the specific lines that touch shared data.
  2. Deadlocks: If you need to acquire multiple mutexes (e.g., Mutex A and Mutex B), you must always acquire them in the same order in every thread. If Thread 1 grabs A then B, and Thread 2 grabs B then A, they can get stuck waiting for each other forever.