[swift-evolution] Covariance and Contravariance

Joe Groff jgroff at apple.com
Wed Dec 9 11:10:46 CST 2015


> On Dec 9, 2015, at 12:04 AM, John McCall via swift-evolution <swift-evolution at swift.org> wrote:
> 
>> On Dec 8, 2015, at 11:47 PM, Simon Pilkington via swift-evolution <swift-evolution at swift.org <mailto:swift-evolution at swift.org>> wrote:
>> 
>> Hi,
>> 
>> Is providing Covariance and Contravariance[1] of generics going to be part of the work on generics for Swift 3? I am sure this topic has come up within the core team and I was wondering what their opinion on the topic was.
>> 
>> I can see this as beneficial, as it would allow the compiler, in conjunction with type inference, to retain more type information and hence allow code to be more type-safe. For example:
>> 
>> class ConcreteClass<GenType : GenericType> {
>>     ...
>>     
>>     func getFunction() -> GenType {
>>         ...
>>     }
>>     
>>     func putFunction(input: GenType) -> Bool {
>>         return ...
>>     }
>>     
>> }
>> 
>> protocol GenericType {
>>     ...
>> }
>> 
>> class GenericType1 : GenericType {
>>     ...
>> }
>> 
>> class GenericType2 : GenericType {
>>     ...
>> }
>> 
>> let array = [ConcreteClass<GenericType2>(...), ConcreteClass<GenericType1>(...)]
>> 
>> let x: GenericType = array[0].getFunction() // this would compile, as `array` would be of type ConcreteClass<some type conforming to GenericType>;
>>                                             // currently `array` is inferred as [AnyObject], so this line doesn't compile
>> array[0].putFunction(…) // this would still not compile, as it would break the type guarantees
>> 
>> As a downside, I can see this making generics more complex and harder to understand. On balance, I think the improved type safety is probably worth it, but I was interested in what others think.
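[Editorial note: for illustration only, the closest approximation of Simon's example in current Swift is a hand-rolled type-erasing wrapper that exposes only the covariant ("output") position. The `AnyGetter` name and the stub method bodies below are hypothetical, not part of any proposal:

```swift
protocol GenericType {}
class GenericType1: GenericType {}
class GenericType2: GenericType {}

class ConcreteClass<GenType: GenericType> {
    private let value: GenType
    init(_ value: GenType) { self.value = value }
    func getFunction() -> GenType { return value }
    func putFunction(input: GenType) -> Bool { return true }
}

// Wrapper that erases GenType, keeping only the covariant-safe operation.
struct AnyGetter {
    let getFunction: () -> GenericType
    init<T: GenericType>(_ base: ConcreteClass<T>) {
        self.getFunction = { base.getFunction() }
    }
}

let array = [AnyGetter(ConcreteClass(GenericType2())),
             AnyGetter(ConcreteClass(GenericType1()))]

let x: GenericType = array[0].getFunction()  // compiles: only the covariant use survives erasure
// There is no putFunction on AnyGetter: the contravariant use is erased entirely.
```

This is manual boilerplate for exactly the relationship a language-level covariance feature would express directly.]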
> 
> One challenge here is that subtyping in Swift doesn’t mean equivalence of representation.  For example, Int is a subtype of Int?, but the latter requires extra space to store in memory.  So while it would make sense to allow, say, [Int] to be a subtype of [Int?], the actual conversion at runtime wouldn’t be trivial — we’d either need to eagerly apply the element-wise transform, or Array would need some ability to apply it lazily.  Both come with fairly serious performance costs.
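[Editorial note: a minimal sketch of the eager element-wise transform John describes — each Int must be rewrapped because an Int? carries an extra tag in its storage:

```swift
let ints: [Int] = [1, 2, 3]

// Int is a subtype of Int?, but the representations differ,
// so the [Int] -> [Int?] conversion is an O(n) element-wise rewrap:
let optionals: [Int?] = ints.map { Optional($0) }
```

Swift does in fact accept the direct assignment `let optionals: [Int?] = ints`, inserting this eager conversion behind the scenes, which is exactly the performance cost at issue.]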

Another thing that makes covariance in Swift interesting compared to other OO languages is value semantics. Mutation operations on value types like Array and Dictionary can be safely covariant, whereas this is unsafe in Java or C#. This is great for expressivity, but it means that a generalized covariance proposal needs to do some legwork to decide exactly which operations can be safely covariant, and how far the compiler can verify that. I haven't thought extensively about this, but it seems potentially complex.
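[Editorial note: a small illustration of the point, with made-up types. In Java or C#, treating a List<Circle> as a List<Shape> and then appending a Square would corrupt the original list, because both names alias one object; that is why their mutable collections must be invariant. With Swift's value semantics, the covariant conversion produces an independent copy, so the mutation is safe:

```swift
protocol Shape {}
struct Circle: Shape {}
struct Square: Shape {}

let circles: [Circle] = [Circle()]

// The covariant conversion yields an independent copy (value semantics),
// so mutating it cannot inject a Square into `circles`:
var shapes: [Shape] = circles
shapes.append(Square())

// `circles` is untouched; no Square can ever appear in it.
```
]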

-Joe
