Even though memory allocations are not always easy to spot, they are fairly expensive due to allocation overhead and the extra load they put on the garbage collector. Take a seemingly innocent function that converts an integer into an interface value:
With the help of the -gcflags="-m" flag, Go's compiler points out that x escapes to heap, so the runtime.convT64 function is used to convert x into a pointer:
00007 (6) CALL runtime.convT64(SB)
which is implemented in runtime/iface.go.
The most interesting bit is the x = unsafe.Pointer(&staticuint64s[val]) part, which returns a pointer into staticuint64s, a preallocated pool of the integers 0 through 255.
It's a fairly cheap way to trade a little memory for fewer allocations, and the same idea appears in other managed languages: Java, for example, caches boxed Integer values for a small range around zero. To make it even more useful, the Go runtime reuses the same cache for the convT16 and convT32 functions as well.
It's such a useful technique that it also appears as dynamic caches, better known as object pools. The extra flexibility of not being limited to a fairly small range of values comes at the cost of synchronization, so it's important to measure that overhead when evaluating object pools.
In summary, consider using static preallocated caches to reduce allocation count and improve performance.