Hi! I've been making some progress on the automated testing code. I've simplified it a lot and tried to keep it simple. ( http://bazaar.launchpad.net/~neandertalspeople/+junk/inkscape-testsuite/file... )
I've been having a lot more trouble getting the image comparison to work well. Because I removed the text from the tests, I didn't have reference images to use. So what I did is the following:
I ran the tests with Inkscape and manually compared the outputs with the reference images (the ones with text). If they looked visually the same, I just copied the Inkscape-rendered image as the reference for that test. This way I could get reference images for the tests that pass. The failing tests remained without correct reference images, so they all failed. Then I ran the tests against trunk Inkscape (I made the reference images using Inkscape 0.48) and found that many of the passing tests failed (all of them except 2), even though the references had been rendered with Inkscape.
So I've started to realize that this is going to be more difficult than I initially thought. I might try using perceptualdiff to decide whether the changes are small enough to consider a test passed. A more developed idea would be to measure the "density of changes", i.e. the fraction of changed pixels in each area of the image.
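To make the "density of changes" idea a bit more concrete, here is a minimal sketch of how it could be measured, assuming Pillow is available and that the reference and the rendered output are same-size PNGs. The tile size and the thresholds are made-up parameters, not anything from the test suite:

from PIL import Image, ImageChops

def change_density(reference_path, rendered_path, tile=32, tolerance=8):
    """Return the highest fraction of changed pixels found in any tile."""
    ref = Image.open(reference_path).convert("RGB")
    out = Image.open(rendered_path).convert("RGB")
    if ref.size != out.size:
        return 1.0  # treat a size mismatch as a complete failure

    # Per-pixel difference, collapsed to a single grayscale channel.
    diff = ImageChops.difference(ref, out).convert("L")
    width, height = diff.size
    worst = 0.0
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            box = diff.crop((left, top,
                             min(left + tile, width), min(top + tile, height)))
            pixels = box.size[0] * box.size[1]
            changed = sum(1 for value in box.getdata() if value > tolerance)
            worst = max(worst, float(changed) / pixels)
    return worst

# Example: count the test as passed when no tile changed too much.
# passed = change_density("reference.png", "rendered.png") < 0.05

Checking the worst tile instead of the whole image should tolerate small antialiasing differences spread across the picture while still catching a missing or badly shifted element; perceptualdiff could be run first as a cheap filter, with this measure only used when it reports a difference.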
Any thoughts? Tavmjong? Jasper?
Guiu
2013/10/11 Jasper van de Gronde <th.v.d.gronde@...528...>:
On 2013-10-11 12:12, Guiu Rocafort wrote:
...OK, sorry for sending so many messages, I just wanted to make a little summary of the work that would be needed. 1. Get the SVG 1.1 Second Edition test files and extract the text. Jasper, are the files in the link [1] you sent from the Second Edition of the test suite or from the first one? Are they up to date?
Probably not (it was years ago). But it should not be terribly difficult to regenerate them from the new tests (textclean.py can remove text and set links to be local).
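(Just to illustrate the idea, not the actual textclean.py: the text-removal part boils down to something like the following, assuming the tests are plain SVG files.)

import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

def strip_text(in_path, out_path):
    """Write a copy of the SVG test file with all <text> elements removed."""
    tree = ET.parse(in_path)
    root = tree.getroot()
    # ElementTree has no parent pointers, so walk every element and drop
    # any <text> children it contains.
    for parent in root.iter():
        for child in list(parent):
            if child.tag == "{%s}text" % SVG_NS:
                parent.remove(child)
    tree.write(out_path, xml_declaration=True, encoding="utf-8")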
By the way, I do actually think that removing the text is good, because if there is a problem with the text rendering code it would affect a lot of tests. By removing the text from the tests, we can better isolate what we are testing with each file.
That's the idea, but for tests that explicitly test text rendering it is of course another story (I think I simply left those out initially).
- Then, we would need to manually separate the tests into pass/fail.
Assuming the reference images are up to date: if you run the tests once, you should get a list of results in testresults.txt. A little scripting or sed magic (didn't test, but I mean something along these lines: grep Pass testresults.txt | sed 's/\([^:]*\):.*/\1/' | xargs -I{} svn mv {} pass/{}) should then allow you to put them in two different directories, or simply give you two lists of tests to run. Alternatively, you could easily hack the test script to output test names to different files depending on the result (generating fail.txt and pass.txt, for example) in the section near the end where the actual testing occurs.
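(Again untested, but a self-contained version of the same splitting in Python could look like this; the "name: result" line format is just what the sed above assumes, so adjust it to whatever testresults.txt actually contains.)

def split_results(results_path="testresults.txt"):
    """Split test results into pass.txt and fail.txt, one test name per line."""
    with open(results_path) as results, \
         open("pass.txt", "w") as passed, \
         open("fail.txt", "w") as failed:
        for line in results:
            if ":" not in line:
                continue
            name, _, result = line.partition(":")
            target = passed if "pass" in result.lower() else failed
            target.write(name.strip() + "\n")

if __name__ == "__main__":
    split_results()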
- Implement the code as explained in the message I sent before. Then add it to the tests in the current Inkscape trunk so it is executed every time someone makes a commit.
That would be great!
- Then we would need to come up with a way to make this information public. But we can discuss that while the first points are being done.
Sure. Whatever you do, I would not recommend the GUI I made (it turned out to be incredibly inefficient, and it is essentially more than you really need, especially if the tests run automatically, since you then have much finer-grained information anyway).