BleepingSwift

> Resource Isolation in Swift Using Locks

Mick MacCallum (@0x7fs)

Locks are the most primitive synchronization tool available. A thread acquires a lock, does its work, and releases it. Any other thread trying to acquire the same lock blocks until it's released. Simple, fast, and easy to misuse.

Swift doesn't have a native lock type, but you have several options from Foundation and the os framework. Each has different performance characteristics and tradeoffs.

NSLock

NSLock is the basic Foundation lock. It's straightforward to use:

class Counter {
    private let lock = NSLock()
    private var _count = 0

    var count: Int {
        lock.lock()
        defer { lock.unlock() }
        return _count
    }

    func increment() {
        lock.lock()
        defer { lock.unlock() }
        _count += 1
    }
}

The defer ensures the lock is released on every exit path, whether the function returns early or throws. Always pair lock() with a deferred unlock(); forgetting to unlock is a classic bug that causes mysterious hangs.

os_unfair_lock

When every nanosecond counts, os_unfair_lock from the os framework is the fastest lock available on Apple platforms:

import os

class FastCounter {
    private var lock = os_unfair_lock()
    private var _count = 0

    var count: Int {
        os_unfair_lock_lock(&lock)
        defer { os_unfair_lock_unlock(&lock) }
        return _count
    }

    func increment() {
        os_unfair_lock_lock(&lock)
        defer { os_unfair_lock_unlock(&lock) }
        _count += 1
    }
}

The "unfair" means threads aren't guaranteed to acquire the lock in the order they requested it. A thread might starve if others keep grabbing the lock first. In practice this rarely matters for short critical sections, and the performance benefit is significant.

One gotcha: os_unfair_lock must not be moved in memory after first use. Storing it in a class (reference type) is safe. Storing it in a struct that gets copied is not.
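If you want to be even stricter about that guarantee, one common pattern allocates the lock on the heap yourself so its address can never change. A minimal sketch (the HeapLockedCounter name is just for illustration):

import os

final class HeapLockedCounter {
    // The lock lives at a fixed heap address for the lifetime of the object,
    // so Swift can never move or copy it out from under us.
    private let lock: UnsafeMutablePointer<os_unfair_lock>
    private var _count = 0

    init() {
        lock = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        lock.initialize(to: os_unfair_lock())
    }

    deinit {
        lock.deinitialize(count: 1)
        lock.deallocate()
    }

    func increment() {
        os_unfair_lock_lock(lock)
        defer { os_unfair_lock_unlock(lock) }
        _count += 1
    }
}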

NSRecursiveLock

Regular locks deadlock if the same thread tries to acquire them twice. Recursive locks allow this:

class TreeProcessor {
    private let lock = NSRecursiveLock()
    private var visited: Set<Node> = []

    func process(_ node: Node) {
        lock.lock()
        defer { lock.unlock() }

        guard !visited.contains(node) else { return }
        visited.insert(node)

        // Recursive calls re-acquire the same lock
        for child in node.children {
            process(child)
        }
    }
}

Use recursive locks sparingly. They're slower than regular locks, and needing them often indicates a design that could be simplified.

NSCondition and NSConditionLock

When threads need to wait for a specific condition to become true, these classes combine a lock with a wait-and-signal mechanism. Here's a bounded buffer built on NSCondition:

class BoundedBuffer<T> {
    private var buffer: [T] = []
    private let capacity: Int
    private let condition = NSCondition()

    init(capacity: Int) {
        self.capacity = capacity
    }

    func put(_ item: T) {
        condition.lock()
        defer { condition.unlock() }

        while buffer.count >= capacity {
            condition.wait()  // Release lock and sleep until signaled
        }

        buffer.append(item)
        condition.signal()  // Wake a waiting thread
    }

    func take() -> T {
        condition.lock()
        defer { condition.unlock() }

        while buffer.isEmpty {
            condition.wait()
        }

        let item = buffer.removeFirst()
        condition.signal()
        return item
    }
}

Producers wait when the buffer is full. Consumers wait when it's empty. Signals wake waiting threads when conditions change.

A Safe Wrapper Pattern

Raw lock APIs are easy to misuse. A wrapper can enforce correct usage:

final class Locked<Value> {
    private var lock = os_unfair_lock()
    private var _value: Value

    init(_ value: Value) {
        self._value = value
    }

    var value: Value {
        os_unfair_lock_lock(&lock)
        defer { os_unfair_lock_unlock(&lock) }
        return _value
    }

    func mutate<T>(_ mutation: (inout Value) -> T) -> T {
        os_unfair_lock_lock(&lock)
        defer { os_unfair_lock_unlock(&lock) }
        return mutation(&_value)
    }
}

Usage becomes cleaner:

class UserManager {
    private let users = Locked<[String: User]>([:])

    func addUser(_ user: User) {
        users.mutate { $0[user.id] = user }
    }

    func getUser(_ id: String) -> User? {
        users.value[id]
    }
}

You can't forget to unlock because the wrapper handles it.
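One caveat: the wrapper makes each individual access safe, but a read through value followed by a separate mutate is still two critical sections, so check-then-act logic needs to live inside a single mutate closure. A sketch, adding a hypothetical addUserIfAbsent to the UserManager above:

// Inside UserManager
func addUserIfAbsent(_ user: User) {
    // Racy alternative (avoid): checking users.value[user.id] first and then
    // calling mutate would let another thread insert the same id in between.
    users.mutate { dict in
        if dict[user.id] == nil {
            dict[user.id] = user    // check and insert under one lock acquisition
        }
    }
}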

Read-Write Locks

When reads vastly outnumber writes, pthread_rwlock allows concurrent readers:

import Darwin

class ReadWriteCache<Key: Hashable, Value> {
    private var rwlock = pthread_rwlock_t()
    private var storage: [Key: Value] = [:]

    init() {
        pthread_rwlock_init(&rwlock, nil)
    }

    deinit {
        pthread_rwlock_destroy(&rwlock)
    }

    func get(_ key: Key) -> Value? {
        pthread_rwlock_rdlock(&rwlock)
        defer { pthread_rwlock_unlock(&rwlock) }
        return storage[key]
    }

    func set(_ key: Key, value: Value) {
        pthread_rwlock_wrlock(&rwlock)
        defer { pthread_rwlock_unlock(&rwlock) }
        storage[key] = value
    }
}

Multiple threads can hold the read lock simultaneously. The write lock is exclusive. This pattern benefits read-heavy workloads like configuration stores or caches.

When Locks Make Sense

Locks are the right choice when you need the absolute lowest overhead. For protecting a single integer increment in a hot path, os_unfair_lock will outperform both actors and GCD queues.

Locks also work in synchronous contexts where actors can't. Property getters, initialization code, and callbacks from C libraries often can't be async.
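For example, a synchronous protocol requirement can be satisfied with a lock but not with an actor, which would force the caller to await. A sketch using a hypothetical ThumbnailCache:

import Foundation

// A synchronous requirement: no `await` allowed here.
protocol ImageProviding {
    func cachedImage(for key: String) -> Data?
}

final class ThumbnailCache: ImageProviding {
    private let lock = NSLock()
    private var storage: [String: Data] = [:]

    // A lock satisfies the synchronous requirement directly; an actor would
    // need `await` to read its isolated state from this method.
    func cachedImage(for key: String) -> Data? {
        lock.lock()
        defer { lock.unlock() }
        return storage[key]
    }

    func store(_ data: Data, for key: String) {
        lock.lock()
        defer { lock.unlock() }
        storage[key] = data
    }
}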

When you're doing low-level systems programming or interfacing with C code that expects pthread-style synchronization, locks fit naturally.

The Dangers

Locks require discipline. The compiler won't remind you to acquire a lock before accessing shared state. One missed lock and you have a race condition.

Deadlocks are easy to create. If thread A holds lock 1 and waits for lock 2, while thread B holds lock 2 and waits for lock 1, both freeze forever. The standard advice—always acquire locks in the same order—is simple to state but hard to enforce across a large codebase.
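The usual mitigation is to pick one global order and always acquire locks in that order. A minimal sketch, using a made-up Account type whose id defines the order (it assumes the two accounts are distinct):

import Foundation

final class Account {
    let id: Int              // used only to define a global lock order
    let lock = NSLock()
    var balance: Double = 0

    init(id: Int) { self.id = id }
}

// Every transfer locks the lower-id account first, so two threads moving
// money between the same pair of accounts can never hold one lock each
// while waiting on the other.
func transfer(_ amount: Double, from source: Account, to destination: Account) {
    let first = source.id < destination.id ? source : destination
    let second = first === source ? destination : source

    first.lock.lock()
    defer { first.lock.unlock() }
    second.lock.lock()
    defer { second.lock.unlock() }

    source.balance -= amount
    destination.balance += amount
}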

Priority inversion can occur when a high-priority thread waits for a lock held by a low-priority thread. The low-priority thread might not get scheduled, causing the high-priority thread to wait indefinitely. Modern locks like os_unfair_lock have some protection against this, but it's still a concern.

Locks don't compose well. If you have two lock-protected resources and need to update both atomically, you need to acquire both locks—and now you're back to worrying about ordering and deadlocks.

Performance Comparison

In my testing on M1/M2 hardware, rough overhead per operation:

  • os_unfair_lock: ~20 nanoseconds
  • NSLock: ~30 nanoseconds
  • pthread_mutex: ~25 nanoseconds
  • GCD serial queue sync: ~200 nanoseconds
  • Actor method call: ~500+ nanoseconds (varies with contention)

These numbers vary significantly based on contention, cache effects, and what work you're doing inside the lock. But the magnitude difference is real: locks are an order of magnitude faster than queues for short critical sections.

Whether that matters depends on your workload. If you're locking millions of times per second, it matters. If you're locking a few times per user interaction, it doesn't.
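Your numbers will differ, so measure on your own workload. Here's a rough, uncontended micro-benchmark sketch along these lines (single thread, trivial critical section):

import os
import Dispatch

// Times N lock/unlock pairs around a trivial increment and reports
// nanoseconds per operation. Contention, cache effects, and real work
// inside the lock will all change the result.
func measureUnfairLock(iterations: Int = 1_000_000) -> Double {
    var lock = os_unfair_lock()
    var counter = 0

    let start = DispatchTime.now().uptimeNanoseconds
    for _ in 0..<iterations {
        os_unfair_lock_lock(&lock)
        counter += 1
        os_unfair_lock_unlock(&lock)
    }
    let end = DispatchTime.now().uptimeNanoseconds

    precondition(counter == iterations)   // keep the loop from being optimized away
    return Double(end - start) / Double(iterations)
}

// print("~\(measureUnfairLock()) ns per locked increment")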

Best Practices

Keep critical sections short. The longer you hold a lock, the more you serialize concurrent work.

Never do I/O or network calls while holding a lock. If the operation blocks, you block every other thread waiting for that lock.
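In practice that means taking a snapshot inside the critical section and doing the slow work after the lock is released. A sketch, adding a hypothetical persistUsers method to the UserManager above (and assuming User is Encodable):

// Inside UserManager
func persistUsers(to url: URL) throws {
    let snapshot = users.value                       // brief: just copies the dictionary
    let data = try JSONEncoder().encode(snapshot)    // slow work with no lock held
    try data.write(to: url)                          // I/O with no lock held
}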

Document which lock protects which state. In complex code, it's easy to lose track.

Consider the Locked<T> wrapper pattern to make correct usage the easy path.

Test with Thread Sanitizer enabled. It catches data races that locks should have prevented.

For more context on when to choose locks versus other approaches, see Choosing the Right Resource Isolation Strategy in Swift.
