On Thu, May 15, 2014, at 03:53 PM, Krzysztof Kosiński wrote:
2014-05-15 23:56 GMT+02:00 Johan Engelen <jbc.engelen@...2592...>:
Hi all, why do we build with -O2 instead of -O3?
Because -O3 compilation takes a lot more time and memory than -O2, yet the improvement is minor in most cases.
It may make sense for compiling releases, but for regular development -O2 is enough. So unless you can show that under some specific circumstances Inkscape performs much better with -O3, I think the default should stay at -O2.
Krzysztof summarized it pretty well; however, I did want to highlight the need for actual performance numbers. We would generally need to check both interactive and command-line performance. The type of data file (or files) involved can also be a big factor, and finally some edge cases (such as being zoomed far in, multiple windows open on a single document, etc.) need to be covered.
For my dev builds I have to use -O0 (optimizations off); otherwise stepping through the code in a debugger can be quite misleading. Stack traces from crashes can also be affected.
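To illustrate the kind of thing that goes wrong (a toy snippet, not Inkscape code): at -O2 the compiler will typically inline a small helper and discard its locals, so the debugger reports variables as "optimized out" and stepping appears to jump around, while at -O0 everything behaves as written.

    // toy.cpp -- illustrative only, not from the Inkscape tree.
    #include <cstdio>

    static int scale(int v) {
        int doubled = v * 2;   // at -O2 this local is usually optimized away
        return doubled + 1;    // and the whole function is commonly inlined
    }

    int main() {
        int total = 0;
        for (int i = 0; i < 10; ++i)
            total += scale(i); // at -O2 a breakpoint in scale() may never seem to hit
        std::printf("%d\n", total);
        return 0;
    }

Built with "g++ -O0 -g" you can step into scale() and print "doubled"; built with "-O2 -g", gdb often shows the variable as <optimized out> and the stepping order no longer matches the source.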
The -O2 level is a pretty common target, though some products actually choose to ship with -O0 instead; some very high-load, high-performance server systems I've worked on were like that. In general one can often architect code cleanly enough that it does not depend on compiler optimizations, and then benefit from clear debugging data when the rare crash or forced dump occurs.
One other factor is code clarity. Sometimes it is easier to get better performance not by bumping up the optimization level, but by "de-optimizing" the source. That is, removing a coder's attempts at micro-optimization often yields code that looks slower but, because it is clearer, actually runs faster, since a good compiler can do more with it.
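As a hypothetical illustration (not code from our tree): a hand-unrolled loop versus the plain loop it replaces. The plain version states the intent directly, and at -O2 a modern compiler will typically unroll and vectorize it on its own, often at least as well as the hand-tuned variant, while staying far easier to read and debug.

    // Hypothetical example, not from the Inkscape sources.

    // Hand "optimized": manual 4x unrolling obscures the intent.
    void scale_hand_tuned(float* dst, const float* src, int n, float k) {
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            dst[i]     = src[i]     * k;
            dst[i + 1] = src[i + 1] * k;
            dst[i + 2] = src[i + 2] * k;
            dst[i + 3] = src[i + 3] * k;
        }
        for (; i < n; ++i)
            dst[i] = src[i] * k;
    }

    // "De-optimized": looks naive, but the compiler can see the whole
    // loop structure and will usually unroll/vectorize it itself at -O2.
    void scale_plain(float* dst, const float* src, int n, float k) {
        for (int i = 0; i < n; ++i)
            dst[i] = src[i] * k;
    }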
But remember, the main take-away is that we need to get performance data from full, live, end-to-end test scenarios. And keep in mind that studies have shown developer intuition about performance to be wrong more than 80% of the time.