On Sat, Jun 10, 2017 at 03:28:02PM +0200, Michael Soegtrop via Inkscape-devel wrote:
> Dear Inkscape Team,
>
> Since GitLab is up and running, the question arises whether we want to use submodules, and if so, which folder structure we should use.
>
> I want to add address/memory sanitizer support and use https://github.com/arsenm/sanitizers-cmake. It is just a few files and I could copy them verbatim, but it would make sense to add it as a submodule.
IMHO, in the interest of keeping things simple while people are still climbing git's learning curve, let's hold off on remotes and submodules for now.
Also, in general, I know we've been apt to pull stuff into the main codebase, but going forward let's try, as much as possible, to keep external dependencies completely external, with detection logic in CMake. The more we can slim down the codebase, the easier it will be for new folks to learn and the less work it will be for busy old-timers to maintain over the long run.
Regarding testing tools, improving code quality for Inkscape is extremely valuable work, so the more the merrier. We just need to be cognizant of setting them up as default-off, with the appropriate CMake logic to detect the test dependency and a config option for the user to switch them on.
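To make the default-off idea concrete, here is a rough CMake sketch. This is illustrative only, not the sanitizers-cmake API: the WITH_ASAN option name is hypothetical, and the flag check is the standard CheckCXXCompilerFlag pattern.

```cmake
# Hypothetical sketch: a default-off build option gating sanitizer flags.
option(WITH_ASAN "Build with AddressSanitizer instrumentation" OFF)

if(WITH_ASAN)
    # Verify the compiler actually supports the flag before applying it.
    include(CheckCXXCompilerFlag)
    set(CMAKE_REQUIRED_FLAGS "-fsanitize=address")
    check_cxx_compiler_flag("-fsanitize=address" HAVE_ASAN)
    unset(CMAKE_REQUIRED_FLAGS)

    if(HAVE_ASAN)
        set(CMAKE_CXX_FLAGS
            "${CMAKE_CXX_FLAGS} -fsanitize=address -fno-omit-frame-pointer")
        set(CMAKE_EXE_LINKER_FLAGS
            "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=address")
    else()
        message(FATAL_ERROR
            "WITH_ASAN requested but the compiler lacks -fsanitize=address")
    endif()
endif()
```

With that shape, a plain build is completely unaffected, and anyone who wants the instrumentation opts in explicitly at configure time.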
In case you don't know what address/memory sanitizer is: it creates instrumented builds (roughly 2x slower) that check for all sorts of invalid memory usage, such as access to freed memory or reads of uninitialized variables. The CMake scripts above enable this via a CMake define, similar to switching between debug and release builds. In my experience it saves a lot of time tracking down obscure bugs in complex C++ code. Right now I have a debug/release deviation I want to track down.
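For anyone who hasn't used it, the workflow would look roughly like this; the SANITIZE_ADDRESS define is my reading of the sanitizers-cmake README, so check there for the actual option names.

```sh
# Sketch: configure a separate instrumented build tree with the sanitizer
# enabled via a cmake define, then build and run as usual.
mkdir build-asan && cd build-asan
cmake -DCMAKE_BUILD_TYPE=Debug -DSANITIZE_ADDRESS=On ..
make

# The instrumented binary aborts on the first invalid access and prints a
# report with stack traces, e.g.
#   ERROR: AddressSanitizer: heap-use-after-free ...
./inkscape
```

No source changes are needed; the checks are injected by the compiler, which is why the overhead stays modest.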
We had set up Coverity at one point, which also checked for similar problems. I think that's probably defunct now, though; we didn't really have enough people following up on the issues it found for it to work.
The tool almost doesn't matter: the real value is the people who follow up on the issues the testing finds. The first run can typically identify a LOT of work items that take a lot of labor to work through; ongoing runs are less work but still need folks to follow up, and unfortunately many regular developers will be too busy with other things to bother, so it helps to have dedicated people familiar with the testing tool.
So, as a rule of thumb, you'd want to see at least, say, three people committed to watching and following up on issues found by a particular tool, if adoption of the tool is going to be invasive in terms of infrastructure requirements or changes to development processes.
Bryce