The C++ Standard Library provides a wide range of features that can help you write more efficient, reliable, and maintainable code. It is also portable, meaning that your code will work on any platform that supports C++. This can save you time and money, as you won't need to write different versions of your code for different platforms. And who doesn't like their code getting faster with a new version of the compiler?
So it sounds like there should be no reason to implement what's already available in std, right? To answer this question, let's solve a trivial warm-up coding problem: implement a function that takes an integer and returns the sum of its digits.
To solve it, we can repeatedly chop off the last digit and add it to an accumulator until the number becomes 0. std::div is convenient here: it returns a structure (std::div_t) containing both the number with its last digit chopped off (quot) and the digit that should be added to the sum (rem).
Previously, we’d have to write something like
which doesn't look too bad, but it performs separate % and / operations instead of a single std::div call that should be able to get both parts with a single assembly instruction.
But as soon as I saw the generated assembly
I started to worry about the function call overhead, which is obviously absent from the manually written version,
so, to remove any doubts, we can use the following benchmark:
The results leave no doubt,
with manual division being 6.7X faster than the version using std::div.
The reason is fairly obvious: because std::div is not inlined, the compiler is unable to perform the trick of replacing division with multiplication and some bit trickery. Combined with the extra function call overhead, this adds up to the 6.7X slowdown.
Despite all of the above, I still strongly suggest using the algorithms from the standard library, for the reasons mentioned at the beginning of the article. Only if one of those usages becomes a bottleneck should you check whether the missed inlining opportunity can be remedied.
Bummer 😕