[swift-evolution] access control proposal
Brent Royal-Gordon
brent at architechies.com
Sun Dec 13 19:53:37 CST 2015
>> But it’s not zero-cost. It’s another thing to learn, another thing to think about, another way you have to mentally analyze your code.
>
> I meant zero performance cost. Of course all features have "cost" if we mean "cognitive overhead". Type safety, for example, has a huge cognitive overhead. Think back to the days of "Bool is not an NSString". But the benefit of type safety is large.
>
> In this case, the cognitive overhead is small, and so is the benefit. But I think the value-per-unit-cost is similar. In both cases the compiler helps you not do something very very bad, that is hard to debug in ObjC.
And I’m saying that I think the benefit is even smaller than you think it is, because you can usually get the same benefits by other means, and the resulting code will be *even safer* than `local` would give you. Again, consider the “value protected by a queue” case. Using `local` here limits the amount of code which could contain an access-without-synchronization bug, but `Synchronized` *completely eliminates* that class of bugs. Synchronized is a *better*, *safer* solution than `local`.
I believe that *most*—certainly not all, but most—uses of `local` are like this. `local` would improve the code's safety, but some sort of refactoring would be even better. Because of that, I don’t think `local` is as valuable as you think it is.
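To make that concrete, here's roughly the difference I mean (a sketch, using the Synchronized interface I work out below; `local` is the proposed access modifier):

	// With `local`, the raw value is still reachable inside its scope,
	// so nothing stops a careless line from skipping the queue:
	local var cache: [String: Int] = [:]
	cache["key"] = 1    // oops—no synchronization, but it compiles

	// With a wrapper, *every* access has to go through the synchronized API:
	let cache = Synchronized(value: [String: Int](), on: queue)
	cache.withMutableValue { $0["key"] = 1 }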
>> many people, when they do that, find this idea wanting.
>
> Who? You? Then build an argument around that. I don't know who "many people" are or what their justification is.
I’m sorry, I don’t mean to make it sound like I’m speaking for some big, ill-defined posse. I just mean that different people will draw the line on the necessary cost-to-benefit in different places, and for some, this feature will fall on the wrong side of the line. People who don’t like this feature don’t misunderstand it; they just have a different subjective assessment of its value.
> My justification is essentially that A) something like Synchronized is a problem nearly everybody has and B) the difficulty of defining a class-based solution in an optimal way.
>
> On B, we seem to agree:
>
>> it might be difficult to construct a Synchronized instance correctly.
>
> So I can only conclude you disagree about A. However, I think my A is much stronger than is strictly necessary to demand a language feature. There are plenty of language features that not everyone uses, so the fact that you don't have a need for it (or even "many people") is not really a counterargument I am able to understand.
As far as a standard Synchronized is concerned, I agree with you that it’s a good idea. I am simply *worried* that we may have trouble coming up with a design that’s flexible enough to accommodate multiple synchronization methods, but still makes it easy to create a properly configured Synchronized object. For example, something like this would be so complicated to configure that it would virtually defeat the purpose of having a Synchronized type:
class Synchronized<Value> {
	init(value: Value, mutableRunner: (Void -> Void) -> Void, immutableRunner: (Void -> Void) -> Void) { … }
	…
}
So I’m going to think out loud about this for a minute. All code was written in Mail.app and is untested.
I suppose we start with a protocol. The ideal interface for Synchronized would look something like this:
protocol SynchronizedType: class {
	typealias Value
	func withMutableValue<R>(@noescape mutator: (inout Value) throws -> R) rethrows -> R
	func withValue<R>(@noescape accessor: Value throws -> R) rethrows -> R
}
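Call sites would then read like this (a sketch, using one of the conforming types below with Value == Int):

	let counter = QueueSynchronized(value: 0)
	counter.withMutableValue { $0 += 1 }        // exclusive access
	let snapshot = counter.withValue { $0 }     // shared read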
You could obviously write separate types like:
class QueueSynchronized<Value>: SynchronizedType {
	private var value: Value
	private let queue: dispatch_queue_t

	init(value: Value, queue: dispatch_queue_t = dispatch_queue_create("QueueSynchronized", DISPATCH_QUEUE_CONCURRENT)) {
		self.value = value
		self.queue = queue
	}

	func withMutableValue<R>(@noescape mutator: (inout Value) throws -> R) rethrows -> R {
		var ret: R?
		var blockError: ErrorType?
		dispatch_barrier_sync(queue) {
			do {
				ret = try mutator(&value)
			}
			catch {
				blockError = error
			}
		}
		// (Strictly speaking, re-throwing a stored error like this may not
		// satisfy `rethrows`; a shipping version might need a workaround.)
		if let error = blockError {
			throw error
		}
		return ret!
	}

	func withValue<R>(@noescape accessor: Value throws -> R) rethrows -> R {
		var ret: R?
		var blockError: ErrorType?
		dispatch_sync(queue) {
			do {
				ret = try accessor(value)
			}
			catch {
				blockError = error
			}
		}
		if let error = blockError {
			throw error
		}
		return ret!
	}
}
and:
class NSLockSynchronized<Value>: SynchronizedType {
	private var value: Value
	private let lock: NSLock

	init(value: Value, lock: NSLock = NSLock()) {
		self.value = value
		self.lock = lock
	}

	func withMutableValue<R>(@noescape mutator: (inout Value) throws -> R) rethrows -> R {
		lock.lock()
		defer { lock.unlock() }
		return try mutator(&value)
	}

	func withValue<R>(@noescape accessor: Value throws -> R) rethrows -> R {
		// XXX I don’t know how to get concurrent reads with Cocoa locks.
		lock.lock()
		defer { lock.unlock() }
		return try accessor(value)
	}
}
But that’s not a very satisfying design—so much boilerplate. Maybe we make the thing you’re synchronizing *on* be the protocol?
protocol SynchronizerType: class {
	func synchronizeForReading(@noescape accessor: Void -> Void)
	func synchronizeForWriting(@noescape mutator: Void -> Void)
}
final class Synchronized<Value> {
	private var value: Value
	private let synchronizer: SynchronizerType

	init(value: Value, on synchronizer: SynchronizerType) {
		self.value = value
		self.synchronizer = synchronizer
	}

	func withMutableValue<R>(@noescape mutator: (inout Value) throws -> R) rethrows -> R {
		var ret: R?
		var blockError: ErrorType?
		synchronizer.synchronizeForWriting {
			do {
				ret = try mutator(&value)
			}
			catch {
				blockError = error
			}
		}
		if let error = blockError {
			throw error
		}
		return ret!
	}

	func withValue<R>(@noescape accessor: Value throws -> R) rethrows -> R {
		var ret: R?
		var blockError: ErrorType?
		synchronizer.synchronizeForReading {
			do {
				ret = try accessor(value)
			}
			catch {
				blockError = error
			}
		}
		if let error = blockError {
			throw error
		}
		return ret!
	}
}
// dispatch_queue_t is a typealias for the OS_dispatch_queue class, so extend that:
extension OS_dispatch_queue: SynchronizerType {
	func synchronizeForReading(@noescape accessor: Void -> Void) {
		dispatch_sync(self, accessor)
	}
	func synchronizeForWriting(@noescape mutator: Void -> Void) {
		dispatch_barrier_sync(self, mutator)
	}
}
extension NSLock: SynchronizerType {
	func synchronizeForReading(@noescape accessor: Void -> Void) {
		// XXX I don’t know how to get concurrent reads with Cocoa locks.
		lock()
		defer { unlock() }
		accessor()
	}
	func synchronizeForWriting(@noescape mutator: Void -> Void) {
		lock()
		defer { unlock() }
		mutator()
	}
}
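For completeness, use would look something like this (same caveat—written in Mail.app and untested):

	let queue = dispatch_queue_create("names", DISPATCH_QUEUE_CONCURRENT)
	let names = Synchronized(value: [String](), on: queue)
	names.withMutableValue { $0.append("Brent") }
	let count = names.withValue { $0.count }

	// Or, with a plain mutex instead of a queue:
	let flags = Synchronized(value: Set<String>(), on: NSLock())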
Huh. That’s…actually pretty clean. And I suppose if you want it to default to using a dispatch queue, that’s just a default value on Synchronized.init(value:on:), or a new init(value:) added in an extension in LibDispatch. I thought this would be a lot hairier.
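That extension would be trivial—something like (again, untested):

	extension Synchronized {
		convenience init(value: Value) {
			self.init(value: value,
			          on: dispatch_queue_create("Synchronized", DISPATCH_QUEUE_CONCURRENT))
		}
	}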
So, yeah, I guess I like the idea of a standard Synchronized. A well-designed version (one with something like SynchronizerType) can work with multiple locking mechanisms, without resorting to something error-prone like passing in closures. +1 for that.
--
Brent Royal-Gordon
Architechies