[swift-evolution] Compile-time generic specialization

Robert Widmann devteam.codafi at gmail.com
Sun Feb 5 10:35:27 CST 2017


I don't understand how this change would cause method dispatch to invoke a different prototype.  Specialization in either language mentioned doesn't do that.

~Robert Widmann

On 2017/02/05 at 11:28, Abe Schneider via swift-evolution <swift-evolution at swift.org> wrote:

> Hi all,
> 
> The current behavior of generics in Swift causes type information to be lost at compile time, because the compiler maintains a single version of each generic function. This runs counter to how C++ works, which creates a new copy of the function per type and therefore preserves the concrete type information. This can cause unexpected behavior from the user’s perspective:
> 
>    protocol DispatchType {}
>    class DispatchType1: DispatchType {}
> 
>    func doBar<D:DispatchType>(value:D) {    
>        print("General function called")
>    }
> 
>    func doBar(value:DispatchType1) {
>        print("DispatchType1 called")
>    }
> 
>    func test<D:DispatchType>(value:D) {
>        doBar(value: value)
>    }
> 
>    let d1 = DispatchType1()
>    test(value: d1)     // prints "General function called", but it’s not obvious why
> 
> 
> The suggested method to get around this issue is to use a protocol to create a witness table, allowing for runtime dispatch. However, this approach is not ideal in all cases because: (a) the overhead of runtime dispatch may not be desirable, especially because this is something that can be determined at compile time; and (b) there are some designs in which this behavior can complicate things.
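> 
> (For reference, a minimal sketch of that protocol-based workaround applied to the `doBar` example above; making `doBar` a protocol requirement is what creates the witness-table entry and buys the runtime dispatch. The names just mirror the earlier example:)
> 
>    protocol DispatchType {
>        func doBar()    // requirement => dispatched through the witness table
>    }
> 
>    extension DispatchType {
>        func doBar() { print("General function called") }
>    }
> 
>    class DispatchType1: DispatchType {
>        func doBar() { print("DispatchType1 called") }
>    }
> 
>    func test<D:DispatchType>(value:D) {
>        value.doBar()
>    }
> 
>    test(value: DispatchType1())    // "DispatchType1 called"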
> 
> One example of a design where this behavior can be problematic is when a protocol is used to determine what functions get dispatched:
> 
>    protocol Storage { … }
>    class Tensor<S:Storage> { … }
> 
>    class CBlasStorage: Storage { … }
>    class OpenCLStorage: Storage { … }
> 
>    func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> { … }
> 
>    // as with the behavior above, these will not be selected when called from another generic function (but will work for non-generic callers)
>    func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> where S:CBlasStorage { … }
>    func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> where S:OpenCLStorage { … }
> 
> In this case, depending on the underlying storage, we want an optimized version of `dot` to be called. To make this work correctly we can add static methods to `Tensor`, but this has several drawbacks: (a) it makes the `Tensor` class monolithic, since every possible method must be determined a priori and defined in the class; (b) it doesn’t allow new methods to be added to `Tensor` without touching the main class; and (c) it unnecessarily forces users to use the more verbose `Tensor.dot(a, b)`.
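> 
> (To illustrate drawbacks (a) and (c), a rough sketch of what the static-method version looks like; the storage-specific kernels are elided:)
> 
>    class Tensor<S:Storage> {
>        // every operation has to be declared here up front (a), and adding a
>        // new one means editing this class (b)
>        static func dot(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> { … }
>    }
> 
>    let c = Tensor.dot(a, b)    // (c): more verbose than dot(a, b)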
> 
> Point (a) could in theory be improved by creating a `TensorOps` protocol. However, because the necessary type constraints cannot currently be placed on extensions, this is not currently possible to implement.
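> 
> (Presumably the shape of that design would be something like the following — a sketch of the kind of constrained conformance the paragraph above says cannot currently be expressed:)
> 
>    protocol TensorOps {
>        associatedtype S: Storage
>        func dot(_ rhs:Tensor<S>) -> Tensor<S>
>    }
> 
>    // the conformance would need to be constrained per storage type, which is
>    // the part that cannot currently be written
>    extension Tensor: TensorOps where S: CBlasStorage {
>        func dot(_ rhs:Tensor<S>) -> Tensor<S> { … }
>    }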
> 
> 
> One potential solution would be to add or extend an attribute for generic functions that forces multiple versions of that function to be created. There is already a `@_specialize` attribute, but: (a) you have to manually write out all the cases you want to cover; and (b) it only affects the compiled code and does not change the dispatch behavior described above. Given that `@_specialize` already exists, I’m going to assume it wouldn’t be a major change to the language to extend this behavior to compile-time dispatch.
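> 
> (For concreteness, the existing attribute looks roughly like this; it is an underscored, unofficial attribute and its exact syntax has varied between compiler versions, so treat this as a sketch:)
> 
>    @_specialize(where D == DispatchType1)   // emits a specialized copy of the body,
>    func doBar<D:DispatchType>(value:D) {    // but overload resolution is unchanged
>        print("General function called")
>    }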
> 
> 
> Thanks!
> Abe
> _______________________________________________
> swift-evolution mailing list
> swift-evolution at swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution

