Mutexes, Condition Variables, and Synchronization
The previous chapter showed that two threads incrementing a shared counter lose updates. This chapter fixes that with mutexes, condition variables, and read-write locks. We start with C's pthreads primitives, then show how Rust wraps the data inside the lock itself.
The Race Condition, Concretely
Here is the broken counter again for reference:
/* race.c */
#include <stdio.h>
#include <pthread.h>

static int counter = 0;

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("Expected 2000000, got %d\n", counter);
    return 0;
}
The CPU executes counter++ as three steps: load, increment, store. Two threads interleaving these steps lose updates.
Time  Thread A            Thread B            counter
----  ------------------  ------------------  -------
 1    load counter (100)                        100
 2                        load counter (100)    100
 3    add 1 -> 101                              100
 4                        add 1 -> 101          100
 5    store 101                                 101
 6                        store 101             101  <-- lost update
Mutex: The Fix
A mutex (mutual exclusion) ensures only one thread enters the critical section at a time.
/* mutex_counter.c */
#include <stdio.h>
#include <pthread.h>

static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&lock);
    printf("Expected 2000000, got %d\n", counter);
    return 0;
}
Now the output is always 2000000. The mutex serializes access to counter.
The lifecycle of a mutex:
PTHREAD_MUTEX_INITIALIZER or pthread_mutex_init(&m, NULL)
|
v
pthread_mutex_lock(&m) <-- blocks if another thread holds it
|
v
[ critical section ]
|
v
pthread_mutex_unlock(&m)
|
v
pthread_mutex_destroy(&m)
Try It: Remove the pthread_mutex_lock/unlock calls and run the program 10 times. How much variance do you see in the output?
Dynamic Initialization
For mutexes allocated on the heap or inside a struct, use pthread_mutex_init:
/* mutex_dynamic.c */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
} SafeCounter;

SafeCounter *safe_counter_new(void) {
    SafeCounter *sc = malloc(sizeof(SafeCounter));
    if (!sc)
        return NULL;
    sc->value = 0;
    pthread_mutex_init(&sc->lock, NULL);
    return sc;
}

void safe_counter_inc(SafeCounter *sc) {
    pthread_mutex_lock(&sc->lock);
    sc->value++;
    pthread_mutex_unlock(&sc->lock);
}

void safe_counter_free(SafeCounter *sc) {
    pthread_mutex_destroy(&sc->lock);
    free(sc);
}

int main(void) {
    SafeCounter *sc = safe_counter_new();
    if (!sc)
        return 1;
    safe_counter_inc(sc);
    safe_counter_inc(sc);
    printf("Counter: %d\n", sc->value);
    safe_counter_free(sc);
    return 0;
}
Deadlock
Deadlock occurs when two threads each hold a lock the other needs.
Thread A Thread B
-------- --------
lock(mutex_1) lock(mutex_2)
... ...
lock(mutex_2) <-- blocked lock(mutex_1) <-- blocked
DEADLOCK DEADLOCK
Prevention rules:
- Lock ordering -- always acquire locks in the same global order.
- Try-lock -- use pthread_mutex_trylock and back off if it fails.
- Avoid holding multiple locks whenever possible.
/* deadlock_fixed.c */
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void *worker1(void *arg) {
    (void)arg;
    /* Always lock A before B */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("Worker 1 has both locks\n");
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *worker2(void *arg) {
    (void)arg;
    /* Same order: A before B */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("Worker 2 has both locks\n");
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
Caution: Deadlocks are silent -- the program just hangs. Use pthread_mutex_timedlock in debug builds to detect them.
Condition Variables
A condition variable lets a thread sleep until some condition is true, without busy-waiting.
Classic pattern: producer-consumer queue.
/* condvar.c */
#include <stdio.h>
#include <pthread.h>
#include <stdbool.h>

#define QUEUE_SIZE 5

static int queue[QUEUE_SIZE];
static int count = 0;
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;

void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        pthread_mutex_lock(&mtx);
        while (count == QUEUE_SIZE) /* MUST be while, not if */
            pthread_cond_wait(&not_full, &mtx);
        queue[count++] = i;
        printf("Produced %d (count=%d)\n", i, count);
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&mtx);
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        pthread_mutex_lock(&mtx);
        while (count == 0) /* MUST be while, not if */
            pthread_cond_wait(&not_empty, &mtx);
        int val = queue[--count]; /* buffer used as a stack (LIFO) for simplicity */
        printf("Consumed %d (count=%d)\n", val, count);
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&mtx);
    }
    return NULL;
}

int main(void) {
    pthread_t prod, cons;
    pthread_create(&prod, NULL, producer, NULL);
    pthread_create(&cons, NULL, consumer, NULL);
    pthread_join(prod, NULL);
    pthread_join(cons, NULL);
    pthread_mutex_destroy(&mtx);
    pthread_cond_destroy(&not_empty);
    pthread_cond_destroy(&not_full);
    return 0;
}
Caution: Always check the condition in a while loop, not an if. POSIX allows spurious wakeups: the thread may wake up even though no one signaled the condvar.
The flow:
pthread_cond_wait(&cond, &mtx):
1. Atomically: unlock mtx + sleep on cond
2. When woken: re-lock mtx
3. Return (caller re-checks condition in while loop)
pthread_cond_signal(&cond):
Wake ONE waiting thread
pthread_cond_broadcast(&cond):
Wake ALL waiting threads
Read-Write Locks
When reads vastly outnumber writes, a read-write lock allows multiple simultaneous readers.
/* rwlock.c */
#include <stdio.h>
#include <pthread.h>

static int shared_data = 0;
static pthread_rwlock_t rwl = PTHREAD_RWLOCK_INITIALIZER;

void *reader(void *arg) {
    int id = *(int *)arg;
    pthread_rwlock_rdlock(&rwl);
    printf("Reader %d sees %d\n", id, shared_data);
    pthread_rwlock_unlock(&rwl);
    return NULL;
}

void *writer(void *arg) {
    (void)arg;
    pthread_rwlock_wrlock(&rwl);
    shared_data = 42;
    printf("Writer set data to 42\n");
    pthread_rwlock_unlock(&rwl);
    return NULL;
}

int main(void) {
    pthread_t r1, r2, w;
    int id1 = 1, id2 = 2;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r1, NULL, reader, &id1);
    pthread_create(&r2, NULL, reader, &id2);
    pthread_join(w, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    pthread_rwlock_destroy(&rwl);
    return 0;
}
Rust: Mutex -- Data Inside the Lock
In C, the mutex and the data it protects are separate. You can forget to lock. In Rust, the data lives inside the Mutex<T>. You cannot access the data without locking.
// mutex_counter.rs
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    for _ in 0..2 {
        let counter = Arc::clone(&counter);
        let h = thread::spawn(move || {
            for _ in 0..1_000_000 {
                let mut num = counter.lock().unwrap();
                *num += 1;
            } // MutexGuard dropped at the end of each iteration -> unlock
        });
        handles.push(h);
    }
    for h in handles {
        h.join().unwrap();
    }
    println!("Result: {}", *counter.lock().unwrap());
}
Rust Note: Mutex::lock() returns a MutexGuard<T>. The guard implements Deref and DerefMut, so you use it like a reference. When the guard is dropped, the mutex is automatically unlocked. You literally cannot forget to unlock.
Rust: RwLock
// rwlock.rs
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let data = Arc::new(RwLock::new(0));
    let mut handles = vec![];

    // spawn readers
    for id in 0..3 {
        let data = Arc::clone(&data);
        handles.push(thread::spawn(move || {
            let val = data.read().unwrap();
            println!("Reader {} sees {}", id, *val);
        }));
    }

    // spawn writer
    {
        let data = Arc::clone(&data);
        handles.push(thread::spawn(move || {
            let mut val = data.write().unwrap();
            *val = 42;
            println!("Writer set data to 42");
        }));
    }

    for h in handles {
        h.join().unwrap();
    }
}
Rust: Condvar
// condvar.rs
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair_clone = Arc::clone(&pair);

    let producer = thread::spawn(move || {
        let (lock, cvar) = &*pair_clone;
        let mut ready = lock.lock().unwrap();
        *ready = true;
        println!("Producer: data is ready");
        cvar.notify_one();
    });

    let (lock, cvar) = &*pair;
    let mut ready = lock.lock().unwrap();
    while !*ready {
        ready = cvar.wait(ready).unwrap();
    }
    println!("Consumer: got the signal, ready = {}", *ready);

    producer.join().unwrap();
}
The Condvar::wait method takes the MutexGuard, releases the lock, sleeps, reacquires the lock, and returns a new guard. Same semantics as pthread_cond_wait, but type-safe.
Rust: Channels (mpsc)
Message passing avoids shared state entirely. Rust provides multi-producer, single-consumer channels.
// channel.rs
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    let producer = thread::spawn(move || {
        for i in 0..5 {
            tx.send(i * i).unwrap();
        }
    });
    for val in rx {
        println!("Received: {}", val);
    }
    producer.join().unwrap();
}
When the tx (sender) is dropped, the rx iterator ends. Clean, simple, no locks.
For multiple producers, clone the sender:
// multi_producer.rs
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    let mut handles = vec![];
    for id in 0..3 {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            tx.send(format!("Hello from thread {}", id)).unwrap();
        }));
    }
    drop(tx); // drop original sender so the rx iterator terminates
    for msg in rx {
        println!("{}", msg);
    }
    for h in handles {
        h.join().unwrap();
    }
}
Driver Prep: The Linux kernel uses similar patterns: wait_event/wake_up for condition variables, spinlock_t for short critical sections, and completion for one-shot signaling. Message-passing patterns appear in kernel workqueues.
Why Rust's Mutex Is Better Than C's
C: mutex and data are separate
- You can access data without locking
- You can lock the wrong mutex
- You can forget to unlock
Rust: data is INSIDE the Mutex<T>
- You MUST lock to access data
- The lock guard auto-unlocks on drop
- The compiler enforces Send + Sync bounds
Try It: In the Rust mutex_counter.rs example, try removing Arc::clone and just moving counter into both closures. What error does the compiler give? Why?
Knowledge Check
- Why must the condition for a condition variable be checked in a while loop, not an if?
- What is the difference between pthread_cond_signal and pthread_cond_broadcast?
- In Rust, what prevents you from accessing data protected by a Mutex<T> without locking it?
Common Pitfalls
- Forgetting to unlock -- in C, every lock must have a matching unlock, even on error paths. Use cleanup handlers or RAII wrappers.
- Locking inside a loop body when you meant to lock outside it -- performance disaster from lock contention.
- Deadlock from inconsistent lock ordering -- establish a global order and document it.
- Using if instead of while with condition variables -- spurious wakeups cause logic bugs.
- Holding a lock while doing I/O -- blocks all other threads waiting on that lock. Keep critical sections short.
- Poisoned mutex in Rust -- if a thread panics while holding a MutexGuard, the mutex is poisoned. Call .unwrap() or handle the PoisonError.