> On Jun 19, 2017, at 5:43 PM, David Sweeris <davesweeris@mac.com> wrote:
>
> Sent from my iPhone
>
> On Jun 19, 2017, at 13:44, John McCall via swift-evolution <swift-evolution@swift.org> wrote:
>
>> On Jun 19, 2017, at 1:58 PM, Stephen Canon via swift-evolution <swift-evolution@swift.org> wrote:
>>
>>> On Jun 19, 2017, at 11:46 AM, Ted F.A. van Gaalen via swift-evolution <swift-evolution@swift.org> wrote:
>>>
>>>> var result: Float = 0.0
>>>> result = float * integer * uint8 + double
>>>> // here, all operands should be implicitly promoted to Double before the complete expression evaluation.
>>>
>>> You would have this produce different results than:
>>>
>>>     let temp = float * integer * uint8
>>>     result = temp + double
>>>
>>> That would be extremely surprising to many unsuspecting users.
>>>
>>> Don't get me wrong; I *really want* implicit promotions (I proposed one scheme for them way back when Swift was first unveiled publicly).
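
For concreteness, here is a small sketch of the difference being described, with the promotions written out explicitly (the values are illustrative and the uint8 factor is dropped for brevity); the two evaluation orders really do give different answers:

    // Illustrative values, not from the thread:
    let float: Float = 0.1
    let integer: Int = 3
    let double: Double = 1.0

    // Promote every operand to Double before evaluating anything:
    let promotedFirst = Double(float) * Double(integer) + double   // ≈ 1.30000000447

    // Evaluate the product in Float first, then add the Double:
    let temp = float * Float(integer)
    let productFirst = Double(temp) + double                       // ≈ 1.30000001192

    // The Float product was rounded to 24 significand bits, so the
    // two results differ in the low bits.
    print(promotedFirst == productFirst)   // false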

>> I don't! At least not for floating point. It is important for both reliable behavior and performance that programmers understand and minimize the conversions they do between different floating-point types.
>
> How expensive is it?

On most contemporary hardware, it's comparable to a floating-point add or multiply. On current-generation Intel, it's actually a little bit more expensive than that. Not catastrophic, but expensive enough that you are throwing away half or more of your performance if you incur spurious conversions on every operation.

This is really common in C and C++, where a naked floating-point literal like 1.2 has type double:

    float x;
    x *= 1.2;

Instead of a bare multiplication (on current-generation x86 hardware: 1 µop and 4 cycles of latency), this produces a convert-to-double, a multiplication, and a convert-to-float (5 µops and 14 cycles of latency, per Agner Fog).

–Steve
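
For comparison, here is roughly the same pattern in Swift, with the conversions the language currently makes you write out explicitly (the variable and values are illustrative, not from the thread):

    var x: Float = 2.5

    // Widen to Double, multiply, then narrow back to Float: the same
    // convert / multiply / convert sequence described above.
    x = Float(Double(x) * 1.2)

    // Keeping the computation in Float avoids both conversions; the
    // literal 1.2 is inferred as Float here, so this is a single
    // Float multiply.
    x *= 1.2

    print(x)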