BleepingSwift

Resource Isolation in Swift Using GCD

By Mick MacCallum (@0x7fs)
Grand Central Dispatch has been the standard approach to concurrency on Apple platforms since 2009. While Swift's modern concurrency features offer compile-time safety, GCD remains relevant—particularly when you need synchronous thread-safe access or when you're maintaining code that predates async/await.

The core idea is simple: instead of protecting data with locks, you serialize access through a queue. Only one block executes on a serial queue at a time, so if all access to shared state goes through that queue, you get thread safety.

Serial Queues for Isolation

The basic pattern wraps a private queue around your mutable state:

class ThreadSafeCache {
    private let queue = DispatchQueue(label: "com.app.cache")
    private var storage: [String: Any] = [:]

    func get(_ key: String) -> Any? {
        queue.sync {
            storage[key]
        }
    }

    func set(_ key: String, value: Any) {
        queue.sync {
            storage[key] = value
        }
    }
}

Every access to storage happens on the queue. The sync call blocks the calling thread until the submitted block finishes executing, which gives you a synchronous API that's safe to call from any thread.
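As a quick sanity check, hammering this pattern from many threads with DispatchQueue.concurrentPerform should leave the cache consistent — a minimal, self-contained sketch of the class above:

```swift
import Dispatch

// The ThreadSafeCache pattern from above, reproduced so this runs standalone.
final class ThreadSafeCache {
    private let queue = DispatchQueue(label: "com.app.cache")
    private var storage: [String: Any] = [:]

    func get(_ key: String) -> Any? {
        queue.sync { storage[key] }
    }

    func set(_ key: String, value: Any) {
        queue.sync { storage[key] = value }
    }
}

let cache = ThreadSafeCache()

// 100 concurrent writers; without the serial queue this would be
// a data race on the dictionary.
DispatchQueue.concurrentPerform(iterations: 100) { i in
    cache.set("key\(i)", value: i)
}

print(cache.get("key42") as? Int ?? -1)  // 42
```

Without the queue, the same loop would corrupt the dictionary or crash under the Thread Sanitizer; with it, every key lands exactly once.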

Sync vs Async

Use sync when you need the result immediately:

func getValue() -> Int {
    return queue.sync { self.value }
}

Use async when you're writing and don't need to wait:

func setValue(_ newValue: Int) {
    queue.async { self.value = newValue }
}

Async writes are faster for callers because they don't block, but they introduce eventual consistency. The value isn't guaranteed to be written by the time setValue returns. For most use cases this is fine, but be aware of it.
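When a caller does need to know that an async write has landed, one option is a completion handler on the setter. This is a hypothetical variant, not from the original article:

```swift
import Dispatch

// Hypothetical counter: async writes with an optional completion callback.
final class Counter {
    private let queue = DispatchQueue(label: "com.app.counter")
    private var value = 0

    func getValue() -> Int {
        queue.sync { value }
    }

    // Returns immediately; `completion` runs once the write has been applied.
    func setValue(_ newValue: Int, completion: (() -> Void)? = nil) {
        queue.async {
            self.value = newValue
            completion?()
        }
    }
}

let counter = Counter()
let done = DispatchSemaphore(value: 0)
counter.setValue(7) { done.signal() }
done.wait()                // after this, the write is guaranteed visible
print(counter.getValue())  // 7
```

The semaphore here is only for the demonstration; in app code the completion would typically update UI or chain further work.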

The Reader-Writer Pattern

If reads far outnumber writes, you can allow concurrent reads while still serializing writes. This uses a concurrent queue with a barrier:

class ReaderWriterCache {
    private let queue = DispatchQueue(label: "com.app.cache", attributes: .concurrent)
    private var storage: [String: Any] = [:]

    func get(_ key: String) -> Any? {
        queue.sync {
            storage[key]
        }
    }

    func set(_ key: String, value: Any) {
        queue.async(flags: .barrier) {
            self.storage[key] = value
        }
    }
}

Multiple threads can read simultaneously through regular sync calls. But when a write comes in with the .barrier flag, the queue waits for all current reads to finish, executes the write exclusively, then resumes concurrent reads.

This pattern shines for read-heavy workloads like caches or configuration stores. For write-heavy workloads, the barrier overhead may negate the benefits.
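To see the barrier in action, here is the reader-writer class exercised with parallel readers and one mid-flight write — a sketch using a hypothetical hit counter, with storage narrowed to Int for clarity:

```swift
import Dispatch

// The ReaderWriterCache pattern from above, specialized to Int values.
final class ReaderWriterCache {
    private let queue = DispatchQueue(label: "com.app.cache", attributes: .concurrent)
    private var storage: [String: Int] = [:]

    func get(_ key: String) -> Int? {
        queue.sync { storage[key] }
    }

    func set(_ key: String, value: Int) {
        queue.async(flags: .barrier) { self.storage[key] = value }
    }
}

let cache = ReaderWriterCache()
cache.set("hits", value: 1)

// Many readers run in parallel; the barrier write at i == 25 waits for
// in-flight reads, runs alone, then lets reads resume.
DispatchQueue.concurrentPerform(iterations: 50) { i in
    _ = cache.get("hits")
    if i == 25 { cache.set("hits", value: 2) }
}

// Blocks are dequeued in FIFO order, so this sync read runs after the
// barrier write enqueued above and is guaranteed to see the new value.
print(cache.get("hits") ?? 0)  // 2
```

The FIFO guarantee is what makes the final read deterministic: a block submitted after a barrier cannot run before it.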

Avoiding Deadlocks

The most common GCD mistake is calling sync on a queue you're already on:

class Broken {
    private let queue = DispatchQueue(label: "com.app.broken")

    func outer() {
        queue.sync {
            inner()  // Deadlock!
        }
    }

    func inner() {
        queue.sync {
            // Never executes
        }
    }
}

The outer block is already executing on the queue when inner tries to sync on the same queue. Since sync blocks until its block completes, and that block can't start until the outer block finishes, you're stuck.

Solutions include using async for the inner call, checking if you're already on the queue (though this gets messy), or restructuring so nested calls don't need synchronization:

class Fixed {
    private let queue = DispatchQueue(label: "com.app.fixed")
    private var storage: [String: Any] = [:]

    func outer() {
        queue.sync {
            innerUnsafe()  // Call the unsynchronized version
        }
    }

    // Public synchronized version
    func inner() {
        queue.sync {
            innerUnsafe()
        }
    }

    // Private unsynchronized version for internal use
    private func innerUnsafe() {
        // Work with storage directly
    }
}

This "unsafe" pattern is common but error-prone. You have to remember which methods are synchronized and which aren't.
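The "check if you're already on the queue" option mentioned above is usually done with a DispatchSpecificKey. A sketch of that approach — workable, but it shows why it gets messy as the class grows:

```swift
import Dispatch

// Re-entrancy detection via queue-specific storage: run the work inline
// if we're already on the queue, otherwise sync onto it.
final class Reentrant {
    private static let key = DispatchSpecificKey<Void>()
    private let queue: DispatchQueue
    private var value = 0

    init() {
        queue = DispatchQueue(label: "com.app.reentrant")
        queue.setSpecific(key: Reentrant.key, value: ())
    }

    private func onQueue<T>(_ work: () -> T) -> T {
        if DispatchQueue.getSpecific(key: Reentrant.key) != nil {
            return work()  // already on the queue; a nested sync would deadlock
        }
        return queue.sync(execute: work)
    }

    func outer() -> Int {
        onQueue { () -> Int in
            inner()  // safe: onQueue detects we're already on the queue
        }
    }

    func inner() -> Int {
        onQueue { () -> Int in
            value += 1
            return value
        }
    }
}

let r = Reentrant()
print(r.outer())  // 1
```

Every public method must remember to route through onQueue, which is the same discipline problem as the unsafe/safe split — the compiler enforces neither.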

Target Queues

You can create queue hierarchies using target queues. A common pattern protects multiple related caches with a single serial queue:

class DataManager {
    private let isolation = DispatchQueue(label: "com.app.datamanager")

    // Pass the target at initialization; calling setTarget(queue:) on a
    // queue that is already active is not allowed.
    private lazy var userCacheQueue = DispatchQueue(
        label: "com.app.usercache",
        target: isolation
    )

    private lazy var settingsCacheQueue = DispatchQueue(
        label: "com.app.settingscache",
        target: isolation
    )
}

Both child queues ultimately serialize through isolation, ensuring that operations across both caches are atomic when needed. This is useful for complex data models where consistency spans multiple collections.
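Fleshing that out with some state shows the payoff: because everything funnels through the shared target, a combined operation can sync on isolation directly and be atomic across both caches. The cache contents and methods below are hypothetical additions for illustration:

```swift
import Dispatch

// Hypothetical DataManager: two caches sharing one serial target queue.
final class DataManager {
    private let isolation = DispatchQueue(label: "com.app.datamanager")

    private lazy var userCacheQueue = DispatchQueue(
        label: "com.app.usercache", target: isolation)
    private lazy var settingsCacheQueue = DispatchQueue(
        label: "com.app.settingscache", target: isolation)

    private var userCache: [String: String] = [:]
    private var settingsCache: [String: Bool] = [:]

    func setUser(_ id: String, name: String) {
        userCacheQueue.sync { userCache[id] = name }
    }

    func setSetting(_ key: String, on: Bool) {
        settingsCacheQueue.sync { settingsCache[key] = on }
    }

    // Atomic across both caches: child-queue blocks also run on `isolation`,
    // so nothing can interleave with this block.
    func resetAll() {
        isolation.sync {
            userCache.removeAll()
            settingsCache.removeAll()
        }
    }

    func userCount() -> Int {
        userCacheQueue.sync { userCache.count }
    }
}

let manager = DataManager()
manager.setUser("u1", name: "Mick")
manager.setSetting("darkMode", on: true)
manager.resetAll()
print(manager.userCount())  // 0
```

Note that resetAll syncs on the target, not a child queue, so it never nests a sync inside another block on the same hierarchy.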

Dispatch Groups for Coordination

When you need to wait for multiple async operations, dispatch groups help:

class BatchProcessor {
    private let queue = DispatchQueue(label: "com.app.batch", attributes: .concurrent)

    func process(items: [Item], completion: @escaping ([Result]) -> Void) {
        var results = [Result]()
        let resultsQueue = DispatchQueue(label: "com.app.results")
        let group = DispatchGroup()

        for item in items {
            group.enter()
            queue.async {
                let result = self.processItem(item)
                resultsQueue.sync {
                    results.append(result)
                }
                group.leave()
            }
        }

        group.notify(queue: .main) {
            completion(results)
        }
    }
}

The group tracks how many tasks are in flight: every enter must be balanced by exactly one leave, and notify fires once the count returns to zero.
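For command-line tools or tests where blocking the current thread is acceptable, group.wait() is the synchronous counterpart to notify. A self-contained sketch, with squaring standing in for real work:

```swift
import Dispatch

let queue = DispatchQueue(label: "com.app.batch", attributes: .concurrent)
let resultsQueue = DispatchQueue(label: "com.app.results")
let group = DispatchGroup()

var results = [Int]()
for item in 1...5 {
    group.enter()
    queue.async {
        let squared = item * item  // stand-in for real per-item work
        resultsQueue.sync { results.append(squared) }
        group.leave()
    }
}

// Block the current thread until every enter() has a matching leave().
group.wait()
print(results.sorted())  // [1, 4, 9, 16, 25]
```

Results arrive in completion order, not submission order, so they're sorted before printing. Never call wait() on the main thread of a GUI app; use notify there instead.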

When GCD Makes Sense

GCD remains the right choice in several situations.

When you need synchronous thread-safe access, GCD delivers it directly. Actors require async/await, which means you can't use them in synchronous contexts like property getters or during app launch before any async context exists.

When you're maintaining legacy code, converting to actors can require significant refactoring. If the GCD code works and is well-tested, there may be no practical benefit to rewriting it.

When you need fine-grained control over execution, GCD's quality-of-service classes, target queues, and barrier flags give you precise control that actors abstract away.

When working with APIs that expect dispatch queues, many Apple frameworks use completion handlers dispatched to queues you provide. Mixing these with actors requires careful bridging.

When to Avoid GCD

GCD's weakness is that thread safety is your responsibility. The compiler won't stop you from accessing shared state without proper synchronization. One forgotten queue.sync and you have a race condition.

GCD also makes it easy to create deadlocks, especially as code evolves and call patterns change. The "don't sync on a queue you might already be on" rule is simple to state but hard to enforce across a large codebase.

If you're starting a new project with Swift's modern concurrency, actors are usually the better default. They provide the same isolation guarantees with compiler enforcement. Reserve GCD for the specific cases where you need its synchronous access or compatibility benefits.

For a comparison with other approaches, see Choosing the Right Resource Isolation Strategy in Swift.
