"In place" algorithms are typically faster. But one has to avoid code changes that accidentally violate data dependencies in loops and algorithms overall, e.g. when the right-hand side of an assignment suddenly consumes an updated value as a result of the optimization. In a few cases, though, we can make this "mistake" on purpose to get more accurate algorithms! For example, Gauss-Seidel iterations (which reuse updated values as they converge) versus Jacobi iterations (no reuse).
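A minimal sketch of that contrast in pure Python (the 3x3 system and function names are illustrative, not from any library): Jacobi writes each sweep into a fresh buffer, while Gauss-Seidel overwrites the iterate in place, so later rows consume values already updated this sweep.

```python
# Jacobi vs. Gauss-Seidel on a small diagonally dominant system Ax = b.

def jacobi_sweep(A, b, x):
    n = len(b)
    new = [0.0] * n                      # fresh buffer: no reuse of updates
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        new[i] = (b[i] - s) / A[i][i]
    return new

def gauss_seidel_sweep(A, b, x):
    n = len(b)
    for i in range(n):                   # in place: row i sees new x[0..i-1]
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]

xj = [0.0, 0.0, 0.0]
xg = [0.0, 0.0, 0.0]
for _ in range(20):
    xj = jacobi_sweep(A, b, xj)
    xg = gauss_seidel_sweep(A, b, xg)
# both iterates approach the same solution; Gauss-Seidel typically
# needs fewer sweeps because it consumes the updated values immediately
```

Here the "violated" dependency (row i reading x[0..i-1] from the current sweep) is exactly what gives Gauss-Seidel its faster convergence.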
Funny that reusing memory appears to be controversial to some folks, judging from LinkedIn comments. Perhaps functional programming comes to mind, in which nothing is reused (at least not in the abstract!). But efficient imperative computing with numerical algorithms almost always entails implementations that reuse memory. Local reuse is also important, e.g. block-wise matrix algorithms optimize cache use by improving spatial and temporal locality. Code readability is not the ultimate goal, since the algorithms are well understood and documented. Documentation is key.
Personally, I try to have an in-place version whenever possible, since it's easy to provide a non-in-place version by just copying the input and passing it to the in-place one.
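A sketch of that pattern, with hypothetical names (the trailing-underscore convention for mutating functions is borrowed from PyTorch style, purely for illustration): the in-place kernel does the work, and the out-of-place version is a one-line copying wrapper.

```python
def normalize_(v):
    """Scale v to unit sum, in place (trailing underscore marks mutation)."""
    total = sum(v)
    for i in range(len(v)):
        v[i] /= total
    return v

def normalize(v):
    """Out-of-place version: copy the input, then reuse the in-place kernel."""
    return normalize_(list(v))

data = [1.0, 3.0]
fresh = normalize(data)   # data untouched, fresh == [0.25, 0.75]
normalize_(data)          # now data itself is mutated
```

The reverse direction (deriving an in-place version from an out-of-place one) is generally not possible without rewriting, which is one argument for making the in-place kernel the primitive.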
> Funny that reusing memory appears to be controversial to some folks, judging from LinkedIn comments.
It's not that controversial - it would be great if compilers could do this automatically based on escape analysis, but we live in the real world, and sometimes that means being pragmatic and helping the compiler :)
> Local reuse is also important, e.g. block-wise matrix algorithms optimize cache use by improving spatial and temporal locality. Code readability is not the ultimate goal, since the algorithms are well understood and documented. Documentation is key.
yeah, loop tiling and other matrix shenanigans are just basic mechanical sympathy for me, and I hope that at some point compilers will be able to do all of this for us, but in the meantime I don't really see a readability problem with it :)
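For concreteness, a minimal sketch of the loop tiling being discussed, as a blocked matrix multiply in pure Python (the block size is illustrative; in practice it's tuned so the tiles of each operand fit in cache, improving temporal locality):

```python
def matmul_tiled(A, B, block=2):
    """Blocked (tiled) multiply of square matrices A and B."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):               # iterate over output tiles
        for jj in range(0, n, block):
            for kk in range(0, n, block):       # accumulate one tile product
                for i in range(ii, min(ii + block, n)):
                    for j in range(jj, min(jj + block, n)):
                        s = C[i][j]
                        for k in range(kk, min(kk + block, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C
```

The three inner loops touch only block-sized tiles of A, B, and C, which is the whole trick: the result is bit-identical to the naive triple loop, only the traversal order changes.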