On Wed, 2013-10-16 at 02:48 +0200, Guiu Rocafort wrote:
Hi! I've been making some progress on the automated testing code. I've simplified it a lot and tried to keep it simple. ( http://bazaar.launchpad.net/~neandertalspeople/+junk/inkscape-testsuite/file... )
I've been having many more problems trying to get the image comparison to work well. Because I removed the text from the tests, I didn't have reference images to use, so here is what I did:
I ran the tests with Inkscape and compared the outputs manually against the reference images (with text). If they looked the same visually, I just copied the Inkscape-rendered image as the reference for that test. This way I got reference images for the tests that passed; the failing tests were left without correct reference images, so they all failed. Then I ran the tests with trunk Inkscape (I had made the reference images using Inkscape 0.48) and found that many of the passing tests now failed (all of them except 2), even though the references themselves had been rendered with Inkscape.
Trunk uses a new renderer based on Cairo, so it is not surprising that the images from 0.48 don't match trunk on a pixel-by-pixel basis. (BTW, the SVG spec allows a one-pixel tolerance in rendering SVGs.) Since automated testing is mostly for catching regressions, I would simply make the reference images using trunk (comparing them with the PNGs from W3C to determine pass/fail).
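For the comparison step itself, a plain per-pixel diff would do to start with. Here is a rough sketch using PIL; the file names and the tolerance parameter are placeholders I made up, not anything from the actual suite:

    from PIL import Image, ImageChops

    def images_match(rendered_path, reference_path, tolerance=0):
        # True if the two images differ by at most `tolerance`
        # in every channel of every pixel.
        a = Image.open(rendered_path).convert("RGBA")
        b = Image.open(reference_path).convert("RGBA")
        if a.size != b.size:
            return False
        diff = ImageChops.difference(a, b)
        # getextrema() gives a (min, max) pair per channel; the largest
        # max is the worst per-channel deviation anywhere in the image.
        return max(hi for lo, hi in diff.getextrema()) <= tolerance

    print(images_match("trunk-output.png", "reference.png"))

With references regenerated from trunk, tolerance=0 should be enough for pure regression checks.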
So I've started to realize that this is going to be more difficult than I initially thought. I might try using perceptualdiff to decide whether the changes are small enough to consider a test passed. A more developed idea would be to measure the "density of changes" in pixels across areas of the image.
Any thoughts? Tavmjong? jasper?
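perceptualdiff is worth trying; as far as I remember it exits with status 0 when the two images pass its perceptual test, so wrapping it is trivial. Your "density of changes" idea could look roughly like the sketch below: count differing pixels per tile, so scattered antialiasing noise passes but a concentrated change in one area fails. The tile size, threshold, and function names here are all made up for illustration:

    import subprocess
    from PIL import Image, ImageChops

    def perceptualdiff_pass(rendered, reference):
        # Assumes perceptualdiff's convention of exiting 0 on a pass.
        return subprocess.call(["perceptualdiff", rendered, reference]) == 0

    def density_pass(rendered, reference, tile=32, max_fraction=0.05):
        # Fail only if some tile has more than `max_fraction` of its
        # pixels changed between the two images.
        a = Image.open(rendered).convert("RGB")
        b = Image.open(reference).convert("RGB")
        if a.size != b.size:
            return False
        diff = ImageChops.difference(a, b)
        # Map any nonzero channel value to 255, collapse the bands, then
        # build a 0/1 mask that is 1 exactly where some channel differed.
        mask = diff.point(lambda v: 255 if v else 0).convert("L")
        mask = mask.point(lambda v: 1 if v else 0)
        w, h = mask.size
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                box = (x, y, min(x + tile, w), min(y + tile, h))
                area = (box[2] - box[0]) * (box[3] - box[1])
                if sum(mask.crop(box).getdata()) > max_fraction * area:
                    return False
        return True

A test could then count as passed if either check succeeds, with perceptualdiff as the first filter and the tile-density check as the fallback.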