A few days ago I was reading source code that dealt with sampling calculations. To spare you the details, let's say it had the following logic:
double rate = static_cast<double>(getSampleRateInt()); // convert to double to make multiplication result double
if (rate == 0) {
    // do something and return
}
double newRate = someFlag ? rate * 2 : rate * rateFactorInt;
// more code with return
I immediately got curious about the comment next to the rate definition - why do we have to convert the integer rate to double so early instead of doing it only when it's needed? In fact, getSampleRateInt() returns only 0 or 1, and 0s make up a significant fraction, which means we often don't need to perform any multiplication at all. On top of that, I started to wonder - how efficient is the comparison rate == 0 in the first place? Let's find out and look under the hood. For the snippet below we get the following assembly:
So the difference is certainly there, but what does it mean in terms of runtime overhead? My benchmark results suggest roughly 1.3x slower runtime for double comparisons:
Looks like the good old YAGNI principle holds in the performance space as well: delay work to the point where it's actually needed. Also, be mindful of the cost of dealing with doubles - it may be more significant than you'd guess.
so did you file a PR? ;)