Barrier Queues in GCD - Thread-Safe Collections Without Locks


Mick MacCallum
@Objective-C @Concurrency @GCD

If you've ever needed to make a collection thread-safe in Objective-C, your first instinct might be to reach for @synchronized or NSLock. But there's a more elegant solution hiding in Grand Central Dispatch: barrier queues.

The Reader-Writer Problem

Consider a cache that many threads read from but occasionally update. Using a simple lock means readers block each other, even though concurrent reads are perfectly safe. What we really want is unlimited concurrent reads, but exclusive access during writes.

#import <Foundation/Foundation.h>

@interface ThreadSafeCache : NSObject
@property (nonatomic, strong) NSMutableDictionary *storage;
@property (nonatomic, strong) dispatch_queue_t isolationQueue;
@end

@implementation ThreadSafeCache

- (instancetype)init {
    self = [super init];
    if (self) {
        _storage = [NSMutableDictionary dictionary];
        _isolationQueue = dispatch_queue_create(
            "com.app.cache.isolation",
            DISPATCH_QUEUE_CONCURRENT
        );
    }
    return self;
}

The key is creating a concurrent queue. Multiple blocks can execute simultaneously on concurrent queues—unless you submit a barrier block.

Barrier Blocks: Exclusive Access

When you submit a block with dispatch_barrier_async, GCD waits for all previously submitted blocks to finish, then runs your barrier block alone, and only then resumes normal concurrent execution.

- (void)setObject:(id)object forKey:(NSString *)key {
    dispatch_barrier_async(self.isolationQueue, ^{
        self.storage[key] = object;
    });
}

- (id)objectForKey:(NSString *)key {
    __block id result;
    dispatch_sync(self.isolationQueue, ^{
        result = self.storage[key];
    });
    return result;
}

Reads use regular dispatch_sync on the concurrent queue, so they can happen simultaneously. Writes use dispatch_barrier_async, which guarantees exclusive access without blocking the caller.

Why Not Just Use a Serial Queue?

A serial queue would also prevent concurrent access, but it serializes everything—including reads. With a barrier queue, you get the best of both worlds. Ten threads reading simultaneously? No problem. One thread writing? Everyone else waits, but only for that brief moment.

The performance difference becomes dramatic under read-heavy workloads. I've seen 10x improvements in throughput for caches where reads outnumber writes 100:1.
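If you want to sanity-check that claim on your own hardware, a rough micro-benchmark is easy to sketch. Everything here—the queue labels, the iteration count, the TimeReads helper—is illustrative, not from the cache above:

static CFTimeInterval TimeReads(dispatch_queue_t queue, size_t iterations) {
    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    // Hammer the queue with simultaneous readers from the global pool.
    dispatch_apply(iterations, dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^(size_t i) {
        dispatch_sync(queue, ^{ /* simulated read */ });
    });
    return CFAbsoluteTimeGetCurrent() - start;
}

// A serial queue serializes the reads; a concurrent queue lets them overlap.
dispatch_queue_t serial = dispatch_queue_create("bench.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrent = dispatch_queue_create("bench.concurrent", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"serial: %.3fs  concurrent: %.3fs",
      TimeReads(serial, 100000), TimeReads(concurrent, 100000));

The absolute numbers will vary wildly by machine and core count; the gap between the two is what matters.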

Synchronous Barriers

Sometimes you need to ensure a write completes before continuing. Use dispatch_barrier_sync for this:

- (void)setObject:(id)object forKey:(NSString *)key waitUntilDone:(BOOL)wait {
    if (wait) {
        dispatch_barrier_sync(self.isolationQueue, ^{
            self.storage[key] = object;
        });
    } else {
        dispatch_barrier_async(self.isolationQueue, ^{
            self.storage[key] = object;
        });
    }
}

Be careful with synchronous barriers, though: calling dispatch_barrier_sync (or any dispatch_sync) targeting the queue you're already running on deadlocks, because the barrier waits for the current block to finish while the current block waits for the barrier.
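One defensive pattern, if your code might already be executing on the isolation queue, is to tag the queue with dispatch_queue_set_specific at creation time and check before issuing a sync barrier. The key and method name below are illustrative additions, not part of the cache shown earlier:

static void *kIsolationQueueKey = &kIsolationQueueKey;

// At queue creation time, in -init:
//   dispatch_queue_set_specific(_isolationQueue, kIsolationQueueKey,
//                               kIsolationQueueKey, NULL);

- (void)safelySetObject:(id)object forKey:(NSString *)key {
    if (dispatch_get_specific(kIsolationQueueKey) != NULL) {
        // Already on the isolation queue; run inline to avoid deadlock.
        self.storage[key] = object;
    } else {
        dispatch_barrier_sync(self.isolationQueue, ^{
            self.storage[key] = object;
        });
    }
}

dispatch_get_specific walks the current queue (and its targets), so it returns non-NULL only when you're already inside the isolation queue's hierarchy.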

A Complete Thread-Safe Array

Here's a practical example wrapping NSMutableArray:

@interface ThreadSafeArray<ObjectType> : NSObject

- (void)addObject:(ObjectType)object;
- (void)removeObjectAtIndex:(NSUInteger)index;
- (ObjectType)objectAtIndex:(NSUInteger)index;
- (NSUInteger)count;
- (void)enumerateObjectsUsingBlock:(void (^)(ObjectType obj, NSUInteger idx, BOOL *stop))block;

@end

@implementation ThreadSafeArray {
    NSMutableArray *_array;
    dispatch_queue_t _queue;
}

- (instancetype)init {
    self = [super init];
    if (self) {
        _array = [NSMutableArray array];
        _queue = dispatch_queue_create("com.app.threadsafe.array", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

- (void)addObject:(id)object {
    dispatch_barrier_async(_queue, ^{
        [self->_array addObject:object];
    });
}

- (void)removeObjectAtIndex:(NSUInteger)index {
    dispatch_barrier_async(_queue, ^{
        [self->_array removeObjectAtIndex:index];
    });
}

- (id)objectAtIndex:(NSUInteger)index {
    __block id result;
    dispatch_sync(_queue, ^{
        result = self->_array[index];
    });
    return result;
}

- (NSUInteger)count {
    __block NSUInteger result;
    dispatch_sync(_queue, ^{
        result = self->_array.count;
    });
    return result;
}

- (void)enumerateObjectsUsingBlock:(void (^)(id, NSUInteger, BOOL *))block {
    dispatch_sync(_queue, ^{
        [self->_array enumerateObjectsUsingBlock:block];
    });
}

@end
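Here's a quick exercise of the wrapper, assuming the class above. dispatch_apply drives writes from many threads at once:

ThreadSafeArray<NSNumber *> *numbers = [[ThreadSafeArray alloc] init];

// 1000 concurrent writers; each addObject: is an async barrier.
dispatch_apply(1000, dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^(size_t i) {
    [numbers addObject:@(i)];
});

// This sync read is enqueued after every barrier above, so it sees all 1000.
NSLog(@"count: %lu", (unsigned long)[numbers count]);

Because dispatch_apply returns only after every iteration has submitted its barrier, and queues are FIFO, the final read is guaranteed to observe all the writes.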

Gotchas

One thing that trips people up: barriers only work on queues you create yourself. Using dispatch_barrier_async on a global concurrent queue does nothing special—it just behaves like a regular async dispatch. The system can't let your barrier block all other work happening on a shared queue.
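To make the contrast concrete—the first dispatch below degrades silently, the second behaves as a true barrier:

// On a global queue, a "barrier" is just a plain async dispatch:
dispatch_queue_t global = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);
dispatch_barrier_async(global, ^{
    // Runs concurrently with everything else -- no exclusivity here.
});

// Barriers only have their special meaning on a concurrent queue you own:
dispatch_queue_t mine = dispatch_queue_create("com.app.mine", DISPATCH_QUEUE_CONCURRENT);
dispatch_barrier_async(mine, ^{
    // Exclusive access on `mine`, as intended.
});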

Also, don't create too many concurrent queues. Each one has overhead. For multiple independent caches, consider using a single queue with key-based synchronization, or just accept separate queues if the isolation is truly independent.
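One way to share a queue is to inject it at init time. The initWithIsolationQueue: initializer below is a hypothetical extension of the cache class, sketched to show the shape of the idea:

// Several caches share one isolation queue. Barriers then serialize
// writes across all of them, which is fine when total write volume is low.
dispatch_queue_t shared = dispatch_queue_create("com.app.caches.shared",
                                                DISPATCH_QUEUE_CONCURRENT);
ThreadSafeCache *users  = [[ThreadSafeCache alloc] initWithIsolationQueue:shared];
ThreadSafeCache *images = [[ThreadSafeCache alloc] initWithIsolationQueue:shared];

The trade-off: a write to one cache briefly blocks reads on all of them, so reserve this for genuinely low-write workloads.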

Barrier queues represent GCD at its finest: a simple API that solves a complex problem. Next time you're tempted to reach for a lock, consider whether a barrier might be cleaner.