[swift-evolution] Compile-time generic specialization
Abe Schneider
abe.schneider at gmail.com
Fri Feb 10 10:13:59 CST 2017
Hi Joe,
The issue with re-dispatching from a function is that it makes
maintenance of the library difficult. For every function I define I
would need a large if-else tree, which makes the introduction of both
new functions and storage types expensive. For example, if I had:
func dot<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S> {
    if let s = lhs as? Tensor<NativeStorage<Float>> { ... }
    else if let s = lhs as? Tensor<NativeStorage<Double>> { ... }
    else if let s = lhs as? Tensor<NativeStorage<Int>> { ... }
    else if let s = lhs as? Tensor<CBlasStorage<Float>> { ... }
    else if let s = lhs as? Tensor<CBlasStorage<Double>> { ... }
    else if let s = lhs as? Tensor<CBlasStorage<Int>> { ... }
    else if let s = lhs as? Tensor<OpenCLStorage<Float>> { ... }
    else if let s = lhs as? Tensor<OpenCLStorage<Double>> { ... }
    else if let s = lhs as? Tensor<OpenCLStorage<Int>> { ... }
}
with the same number of Impls to go along with it. If I added a new
storage type (e.g. CUDA), I would have to add a branch for each element
type (and I haven't even added Byte and Short yet) to every function
that can be performed on a Tensor (currently ~20-30 functions). For my
library this doesn't lead to maintainable code.
In C++ this is exactly what templates are meant to solve. In Swift,
generics already solve this problem when the call is made from a
non-generic function, or when the generic is defined as a
protocol/class requirement, so it would seem to fall within the
pattern of what should be expected from generics.
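To make the distinction concrete, here is a small self-contained sketch (the names are illustrative, not from the library above): calling the overloaded function directly selects the specialized version, while calling it through an unconstrained generic selects the unspecialized one, because overload resolution happens at compile time with only the local knowledge that T is unconstrained.

```swift
protocol P {}
extension Int: P {}

func f<T>(_ x: T) -> String { return "unspecialized" }
func f<T: P>(_ x: T) -> String { return "specialized" }

// Inside g, T is an unconstrained generic parameter, so overload
// resolution can only ever pick the unspecialized f.
func g<T>(_ x: T) -> String { return f(x) }

let direct = f(1)    // Int's conformance to P is visible at this call site
let indirect = g(1)  // resolved inside g with no knowledge of P
```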
Thanks!
Abe
On Wed, Feb 8, 2017 at 12:55 PM, Joe Groff <jgroff at apple.com> wrote:
>
> On Feb 6, 2017, at 10:06 AM, Douglas Gregor via swift-evolution
> <swift-evolution at swift.org> wrote:
>
>
> On Feb 5, 2017, at 5:36 PM, Abe Schneider via swift-evolution
> <swift-evolution at swift.org> wrote:
>
> Hi Robert,
>
> Exactly. The benefit being that you can figure out the correct function to
> dispatch entirely at compile time. My understanding is that Swift doesn’t do
> this because of the associated code bloat (and it’s usually not necessary).
> However, I think there is some important functionality by allowing
> specialization to control dispatch in a similar way to c++. There is also
> the design element — my (fairly) succinct Tensor class that used to be ~300
> lines is now already close to an additional 1000 lines of code and growing.
> While the type of library I’m writing might be outside of what is normally
> done with Swift, I suspect the design pattern I’m using crops up in other
> places, as well as the need for dispatch on specialization (e.g.
> http://stackoverflow.com/questions/41640321/extending-collection-with-a-recursive-property-method-that-depends-on-the-elemen).
>
>
> You can’t figure out the correct function to dispatch entirely at compile
> time because Swift supports retroactive modeling. Let’s make this a
> super-simple example:
>
> // Module A
> public protocol P { }
> public func f<T>(_: T) { print("unspecialized") }
> public func f<T: P>(_: T) { print("specialized") }
>
> public func g<T>(_ x: T) { f(x) }
>
> // Module B
> import A
> func testG(x: Int) {
>     g(x) // the best we can statically do is print "unspecialized"; Int doesn't conform to A.P, but...
> }
>
> // Module C
> import A
> extension Int: P { } // dynamically, Int does conform to A.P!
>
> Swift’s model is that the selection among ad hoc overloads is performed
> statically based on local knowledge, and is consistent across all
> “specializations” of a generic function. Protocol requirements and
> overridable methods are the customization points.
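The "customization points" contrast can be shown with a runnable sketch (names here are illustrative): a protocol requirement, unlike an ad hoc overload, is dispatched through the conforming type's witness table, so the customized behavior is reached even from inside a generic function.

```swift
protocol Describable {
    func describe() -> String
}

extension Describable {
    // Default implementation, used when a conformer doesn't provide its own.
    func describe() -> String { return "default" }
}

struct Custom: Describable {
    // Satisfies the requirement, so it is recorded in the witness table.
    func describe() -> String { return "custom" }
}

// Even though T is generic here, the call goes through the witness
// table and reaches Custom's implementation.
func g<T: Describable>(_ x: T) -> String { return x.describe() }

let result = g(Custom())
```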
>
> Selecting ad hoc overloads at runtime is possible, but of course it has
> downsides. You could run into run-time ambiguities, for example:
>
> // Module A
> public protocol P { }
> public protocol Q { }
> public func f<T>(_: T) { print("unspecialized") }
> public func f<T: P>(_: T) { print("specialized for P") }
> public func f<T: Q>(_: T) { print("specialized for Q") }
>
> public func g<T>(_ x: T) { f(x) }
>
> // Module B
> import A
> extension Int: P { }
>
> // Module C
> import A
> extension Int: Q { }
>
> // Module D
> import A
> func testG(x: Int) {
>     g(x) // run-time ambiguity: which specialized "f" do we get?
> }
>
> There are reasonable answers here if we know what the potential set of
> overloads is at compile time. It's a problem I've been interested in for a
> long time. That dynamic dispatch can be implemented somewhat reasonably: the
> compiler can emit a static decision tree, so long as we're willing to limit
> the set of overloads to the ones that are visible from g(_:), and the
> dispatch can be folded away by the optimizer when we're specializing the
> function and the visibility of the types and/or protocols in question is
> limited.
>
> As far as changes to Swift, `@_specialize` already does exactly this (except
> it is treated as a hint). You would need to transform the function to
> something like <function-name>_<mangled-type-name>(…) and a table of
> transformed functions, but after that you can just treat the functions as
> normal functions (and ignore the fact they were defined as generic). So,
> yes, specializations should be forced at every level. While this will lead
> to some code bloat, since it only occurs for the functions marked by the
> user, I would imagine it’s: (a) limited to the extent it occurs; and (b)
> manageable by simply not using the attribute (and using protocol witness
> tables instead). But at least that way you give the user the choice to do
> what is best for the particular situation.
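For reference, a sketch of what using the attribute looks like. This is hedged heavily: `@_specialize` is an underscored, officially unsupported attribute, and the `where`-clause spelling shown here comes from later compiler versions than the one discussed in this thread, so it may differ from the 2017 syntax.

```swift
// Assumed/unofficial: @_specialize is an underscored attribute and its
// spelling has changed between compiler versions.
@_specialize(where T == Int)
@_specialize(where T == Float)
func sum<T: Numeric>(_ xs: [T]) -> T {
    // The generic body still dispatches dynamically unless the optimizer
    // substitutes one of the forced specializations above.
    return xs.reduce(0, +)
}

let intSum = sum([1, 2, 3])
let floatSum = sum([0.5, 1.5] as [Float])
```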
>
>
> For reference, `@_specialize` is doing dynamic dispatch. That dynamic
> dispatch gets optimized away when we specialize the generic function, the
> same way I mentioned above.
>
> There might be a reasonable solution to the problem you’re encountering. I
> don’t think it’s “force specialization at compile time like C++”, but
> something akin to grouping together multiple overloads where we want dynamic
> dispatch of callers that invoke them, statically diagnosing when that set of
> overloads can have ambiguities in it (see the paper I referenced above), and
> teaching the optimizers to resolve that dynamic dispatch statically whenever
> possible.
>
>
> Specialization handles casts, so you can handle some cases today by writing
> the implementation selection logic out:
>
> func f<T>(x: T) {
>     if let i = x as? Int { fooImplForInt(i) }
>     else if let f = x as? Float { fooImplForFloat(f) }
>     else { fooImplForGeneric(x) }
> }
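A self-contained, runnable version of that cast-based selection pattern, where the `fooImpl…` functions are hypothetical stand-ins for real specialized implementations:

```swift
// Hypothetical impls: stand-ins for specialized kernels.
func fooImplForInt(_ i: Int) -> String { return "Int impl" }
func fooImplForFloat(_ f: Float) -> String { return "Float impl" }
func fooImplForGeneric<T>(_ x: T) -> String { return "generic impl" }

// Runtime casts pick the implementation; when the specializer optimizes
// this for T == Int or T == Float, the branch folds away statically.
func f<T>(_ x: T) -> String {
    if let i = x as? Int { return fooImplForInt(i) }
    else if let fl = x as? Float { return fooImplForFloat(fl) }
    else { return fooImplForGeneric(x) }
}
```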
>
> And you'll get the `Int` or `Float` implementation when the specializer
> optimizes for T == Int or T == Float. This approach has some limitations
> today, since protocol existentials in particular have limited functionality,
> and you don't get type refinement of T from the cast. Perhaps we could
> support testing general type constraints as a form of statement condition,
> something like this:
>
> func f<T>(x: T) {
>     if <T: P> {
>         // we can assume T: P here
>         x.fooImplFromP()
>     } else if <T: Q> {
>         // we can assume T: Q here
>         x.fooImplFromQ()
>     } else {
>         fooImplForGeneric(x)
>     }
> }
>
> I think the fact that one call site maps to one implementation is a feature,
> and dispatch by control flow is easier to understand than overload
> resolution. If we want the optimizer to still be able to specialize
> reliably, the set of overloads that could be dispatched to would likely have
> to be closed anyway.
>
> -Joe