[swift-dev] Reconsidering the global uniqueness of type metadata and protocol conformance instances

John McCall rjmccall at apple.com
Fri Jul 28 18:52:12 CDT 2017

> On Jul 28, 2017, at 7:45 PM, Joe Groff <jgroff at apple.com> wrote:
>> On Jul 28, 2017, at 4:30 PM, John McCall <rjmccall at apple.com <mailto:rjmccall at apple.com>> wrote:
>>> On Jul 28, 2017, at 7:11 PM, Joe Groff <jgroff at apple.com <mailto:jgroff at apple.com>> wrote:
>>>> On Jul 28, 2017, at 3:54 PM, John McCall <rjmccall at apple.com <mailto:rjmccall at apple.com>> wrote:
>>>>> On Jul 28, 2017, at 6:34 PM, Joe Groff <jgroff at apple.com <mailto:jgroff at apple.com>> wrote:
>>>>>> On Jul 28, 2017, at 3:30 PM, John McCall <rjmccall at apple.com <mailto:rjmccall at apple.com>> wrote:
>>>>>>> On Jul 28, 2017, at 6:24 PM, Joe Groff <jgroff at apple.com <mailto:jgroff at apple.com>> wrote:
>>>>>>>> On Jul 28, 2017, at 3:15 PM, John McCall <rjmccall at apple.com <mailto:rjmccall at apple.com>> wrote:
>>>>>>>>> On Jul 28, 2017, at 6:02 PM, Andrew Trick via swift-dev <swift-dev at swift.org <mailto:swift-dev at swift.org>> wrote:
>>>>>>>>>> On Jul 28, 2017, at 2:20 PM, Joe Groff via swift-dev <swift-dev at swift.org <mailto:swift-dev at swift.org>> wrote:
>>>>>>>>>> The Swift runtime currently maintains globally unique pointer identities for type metadata and protocol conformances. This makes checking type equivalence a trivial pointer equality comparison, but most operations on generic values do not really care about exact type identity and only need to invoke value or protocol witness methods or consult other data in the type metadata structure. I think it's worth reevaluating whether having globally unique type metadata objects is the correct design choice. Maintaining global uniqueness of metadata instances carries a number of costs. Any code that wants type metadata for an instance of a generic type, even a fully concrete one, must make a potentially expensive runtime call to get the canonical metadata instance. This also greatly complicates our ability to emit specializations of type metadata, value witness tables, or protocol witness tables for concrete instances of generic types, since specializations would need to be registered with the runtime as canonical metadata objects, and it would be difficult to do this lazily and still reliably favor specializations over more generic witnesses. The lack of witness table specializations leaves an obnoxious performance cliff for instances of generic types that end up inside existential containers or cross into unspecialized code. The runtime also obligates binaries to provide the canonical metadata for all of their public types, along with all the dependent value witnesses, class methods, and protocol witness tables, meaning a type abstraction can never be completely "zero-cost" across modules.
>>>>>>>>>> On the other hand, if type metadata did not need to be unique, then the compiler would be free to emit specialized type metadata and protocol witness tables for fully concrete value types without consulting the runtime. This would let us avoid runtime calls to fetch metadata in specialized code, and would make it much easier for us to implement witness specialization. It would also give us the ability to potentially extend the "inlinable" concept to public fragile types, making it a client's responsibility to emit metadata for the type when needed and keeping the type from affecting its home module's ABI. This could significantly reduce the size and ABI surface area of the standard library, since the standard library contains a lot of generic lightweight adapter types for collections and other abstractions that are intended to be optimized away in most use cases.
>>>>>>>>>> There are of course benefits to globally unique metadata objects that we would lose if we gave up uniqueness. Operations that do check type identity, such as comparison, hashing, and dynamic casting, would have to perform more expensive checks, and nonunique metadata objects would need to carry additional information to enable those checks. It is likely that class objects would have to remain globally unique, if for no other reason than that the Objective-C runtime requires it on Apple platforms. Having multiple equivalent copies of type metadata has the potential to increase the working set of an app in some situations, although it's likely that redundant compiler-emitted copies of value type metadata would at least be able to live in constant pages mapped from disk instead of getting dynamically instantiated by the runtime as everything is today. There could also be subtle source-breaking behavior for code that bitcasts metatype values to integers or pointers and expects bit-level equality to indicate type equality. It seems unlikely to me that giving up uniqueness would buy us any simplification to the runtime, since the runtime would still need to be able to instantiate metadata for unspecialized code, and we would still want to unique runtime-instantiated metadata objects as an optimization.
>>>>>>>>>> Overall, my intuition is that the tradeoffs come out in favor of nonunique metadata objects, but what do you all think? Is there anything I'm missing?
>>>>>>>>>> -Joe
>>>>>>>>> In a premature proposal two years ago, we agreed to ditch unique protocol conformances but install the canonical address as the first entry in each specialized table.
>>>>>>>> This would be a reference to (unique) global data about the conformance, not a reference to some canonical version of the protocol witness table.  We do not rely on having a canonical protocol witness table.  The only reason we unique them (when we do need to instantiate) is because we don't want to track their lifetimes.
>>>>>>>>> That would mitigate the disadvantages that you pointed to. But, we would also lose the ability to emit specialized metadata/conformances in constant pages. How do you feel about that tradeoff?
>>>>>>>> Note that, per above, it's only specialized constant type metadata that we would lose.
>>>>>>>> I continue to feel that having to do structural equality tests on type metadata would be a huge loss.
>>>>>>> I don't think it necessarily needs to be deep structural equality. If the type metadata object or value witness table had a pointer to a mangled type name string, we could strcmp those strings to compare equality, which doesn't seem terribly onerous to me, though if it were we could perhaps use the string to lazily resolve the canonical type metadata pointer, sort of like we do with type metadata for imported C types today.
>>>>>> So generic code to instantiate type metadata would have to construct these mangled strings eagerly?
>>>>> We already do exactly that for the ObjC runtime name of generic class instantiations, for what it's worth, but it could conceivably be lazy as well, at the cost of making the comparison yet more expensive. There aren't that many runtime operations that need to do type comparison, though—the ones I can think of are casting and the equality/hashing operations on Any.Type—so how important is efficient type comparison?
>>>> A fair question.  It's extremely important for type uniquing — of course, you're talking about making that less important, but when it does happen, it will cost more.
>>>> The way I see it is that the importance of specialization is 95% about specializing tables of function pointers, i.e. value witness tables, protocol witness tables, and class v-tables.  There's no reason we can't use specialized protocol witness tables today.  Your proposal still leaves us uniquing class v-tables.  So this is just about making it easier (on us as implementors) to create specialized value witness tables, plus the trade-off of being able to refer to non-dependent type metadata slightly more cheaply vs. making type comparisons vastly more expensive.
>>> Well, what do you think about the possibility of making some public types in the standard library "always-emit-into-client"? AIUI a lot of the standard library's space and ABI surface area is spent on type metadata and conformances for things that almost always get inlined in practice, so I think there's also a potential to shrink the stdlib's size and ABI liability. (To be fair, we could also potentially accomplish that using the foreign metadata table today if it was interesting.)
>> Well, first, I think our metadata could pretty easily go on a diet, even apart from any question of laziness.  Value type metadata don't need to store a parent, nominal type descriptors are not optimized for compactness, generic patterns are extremely bloated, etc.  (Do we really even need a "pattern" to instantiate a type?)  All of this is stuff we need to do for ABI stability.
> Fair point, though there's also a lot of stuff that hangs off of the metadata, particularly value witnesses, that could be made lazy along with it.

For sure.  Although I suspect we could get a lot of that back by structurally uniquing value witnesses!  Every generic type which just wraps a collection has basically exactly the same representation...

