[swift-dev] Making the sign of NaNs unspecified to enable enum layout optimization
Joe Groff
jgroff at apple.com
Thu Oct 20 12:55:45 CDT 2016
> On Oct 20, 2016, at 10:35 AM, Jordan Rose <jordan_rose at apple.com> wrote:
>
>>
>> On Oct 20, 2016, at 10:04, Joe Groff <jgroff at apple.com <mailto:jgroff at apple.com>> wrote:
>>
>>
>>> On Oct 20, 2016, at 9:42 AM, Jordan Rose <jordan_rose at apple.com <mailto:jordan_rose at apple.com>> wrote:
>>>
>>> Some disconnected thoughts:
>>>
>>> - “Does not interpret” does not mean “does not preserve”. The very next sentence in the standard is "Note, however, that operations on bit strings—copy, negate, abs, copySign—specify the sign bit of a NaN result, sometimes based upon the sign bit of a NaN operand."
>>>
>>> - If we were to claim a class of NaNs, I would pick signalling NaNs rather than positive or negative ones. AFAIK most NaN-embedding tricks avoid signalling NaNs because they can, well, signal, even though (again AFAIK) most modern systems don’t bother.
>>
>> Claiming sNaNs would be unfortunate since "signaling" is about the only semantically distinct bit NaNs normally have, and I think we should minimize interference with users who are taking advantage of signaling or NaN payloads for their own means. (OTOH, on some platforms like x87 it's already practically impossible to preserve the signaling bit, since even loads will immediately raise the exception and quiet the NaN, and there would be some nice safety benefits to getting a trap early if a Float? is bitcast to a Float without being formally checked first.)
>
> Right, that’s sort of my point. If you’re using NaN payloads for non-float-related information, you shouldn’t be using the bit that’s part of the floating-point standard. But I could also see plenty of people saying “we’re not going to waste a whole bit” and not bothering to distinguish it from the rest.
>
> At the same time, I can certainly see people saying “hey, an extra bit” about the sign bit. If you’re using NaN payloads, you probably are going to check for that before performing any operations on the NaN, rather than relying on nil-swallowing NaN-propagation doing what your program requires.
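For concreteness, here's a minimal sketch of the payload trick using the standard library's `Float(nan:signaling:)` initializer. The mask below assumes the IEEE 754 binary32 layout (1 sign bit, 8 exponent bits, 23 significand bits, with the quiet bit at the top of the significand); it's an illustration, not a recommendation:

```swift
// Stash a payload in a quiet NaN, then recover it from the raw bits.
let tagged = Float(nan: 0x1234, signaling: false)
assert(tagged.isNaN)

// The payload lives in the low 22 significand bits, below the quiet bit.
// Mask off the sign, exponent, and quiet bit to get it back.
let payload = tagged.bitPattern & 0x003F_FFFF
print(String(payload, radix: 16))  // prints "1234"
```

Of course, as soon as such a NaN flows through an arithmetic operation, nothing portable is guaranteed about which NaN comes out the other side.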
>
>>>
>>> - I don’t feel like we have a coherent story here. We know that APIs taking “Double” or “Float” can represent any bit pattern. The last plan I heard for floating-point comparison treats NaNs as unordered relative to one another, even in a total order comparison. (I maintain that this is unsound.) And this proposal would treat some or all NaNs as invalid. I feel like we need to pick one approach here.
>>
>> I'm not saying that they'd be invalid, only that the language doesn't guarantee to preserve these representations exactly. That seems orthogonal to the comparison issue—whatever rule we come up with for float ordering, all NaNs ought to be treated uniformly by that rule.
>
> I don’t think I agree with either of those sentences. I’d really like the story to be either “we treat different NaN bit strings as distinct” or “there are no meaningful distinctions between NaNs in Swift (except maybe sNaN vs. qNaN); if you want anything more you need to use Int as your storage type”. Each of those has natural consequences for me concerning both extra inhabitants and total ordering.
JavaScript goes as far as saying that there's semantically only one NaN value. We could reasonably do the same (though I think there's value in preserving 'sNaN' and 'qNaN'), since hardware and libm already make basically no portable guarantees about what NaN representation you get. That might make it less morally wrong to sometimes treat all NaNs uniformly and sometimes preserve them.
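To make the bit-string point above concrete, here's a small Swift sketch of the operations the standard carves out (copy, negate, abs, copySign), which do specify a NaN result's sign bit even though comparisons treat all NaNs uniformly:

```swift
// The bit-string operations manipulate only the sign bit,
// so their effect on a NaN is fully specified.
var nan = Float.nan
assert(nan.sign == .plus)

nan.negate()                    // flips the sign bit, even on a NaN
assert(nan.sign == .minus)

let restored = abs(nan)         // clears the sign bit
assert(restored.sign == .plus)

// copySign: take the sign of -0.0, keep the NaN magnitude bits.
let resigned = Float(signOf: -0.0, magnitudeOf: restored)
assert(resigned.sign == .minus)
assert(resigned.isNaN)
```

That's the tension in a nutshell: these four operations are deterministic on NaN signs, while everything else (arithmetic, libm, hardware) is free to hand back whatever NaN it likes.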
-Joe