This is one of the classic concurrency patterns in computer science, and the perfect scenario to demonstrate why mutexes are necessary and how they work in practice.

The Scenario (Figure 7.1)

We have a shared array of data (a buffer).

  • Producers (Threads): Multiple threads trying to write data into this array at the same time.
  • Consumer (Thread): A single thread that comes along later to check if the data is correct.

The Danger: If two producers act on buff[nput] at the exact same moment, they might overwrite each other’s value or increment the nput counter incorrectly (the race condition we discussed earlier).

1. The Shared Data Structure

To manage this cleanly, the book defines a struct that holds both the data and the synchronization primitive (the mutex) together. This is a best practice in C programming.

/* Globals shared between threads */
struct {
    pthread_mutex_t mutex;    // The lock
    int buff[MAXNITEMS];      // The shared data
    int nput;                 // Index for the next item to store
    int nval;                 // Next value to store
} shared = {
    PTHREAD_MUTEX_INITIALIZER // Initialize the mutex (the remaining members start at zero)
};
  • mutex: The guard dog.
  • nput: The critical counter. This tells us where the next free slot is.
  • nval: The next value to store. The producers generate the sequence 0, 1, 2, ...
2. The Producer Logic (Figure 7.3)

The magic happens in the produce function. Every producer thread runs this loop. Look closely at where the lock is placed:

void *produce(void *arg)
{
    for ( ; ; ) {
        Pthread_mutex_lock(&shared.mutex); // <--- ENTER CRITICAL REGION
 
        if (shared.nput >= nitems) {     // nitems: total number of items to produce (a global)
            Pthread_mutex_unlock(&shared.mutex); // Unlock before leaving!
            return(NULL);
        }
 
        shared.buff[shared.nput] = shared.nval;
        shared.nput++;
        shared.nval++;
 
        Pthread_mutex_unlock(&shared.mutex); // <--- EXIT CRITICAL REGION
        
        *((int *) arg) += 1; // Increment local counter (safe outside lock)
    }
}

Key Explanations:

  1. The Lock: Before touching shared.nput or shared.buff, the thread must acquire the lock. If another producer has it, this thread waits.
  2. The Critical Region: Inside the lock, we safely update the array and increment the counters.
  3. The Unlock: Immediately after the shared variables are updated, we unlock. This lets other waiting threads proceed.
  4. Efficiency: Notice that the line *((int *) arg) += 1; sits outside the lock. It updates a counter private to this thread, so no other thread can touch it; keeping it outside makes the critical region as short as possible.
3. The Result

If you run this program:

  • With Mutexes: The consumer reads the array and finds it perfectly sequential (0, 1, 2, 3...).
  • Without Mutexes: The consumer finds garbage data (e.g., buff[5] == 7, or missing numbers) because the threads overwrote each other.
A Critical Limitation

While this code is correct, it has a logical limitation that leads us to the next section. In this example, the consumer is started only after the producers have finished. What if we want the consumer to process data while the producers are still creating it? If the consumer runs and finds the buffer empty (nput == 0), it has to wait. With only a mutex, its sole option is to loop endlessly (“spinning”), constantly locking and unlocking just to check whether data has arrived. This busy-waiting wastes enormous amounts of CPU time.