On 2013-10-11 12:12, Guiu Rocafort wrote:
...Ok, sorry for sending so many messages; I just wanted to make a little summary of the work that would be needed. 1. Get the SVG 1.1 Second Edition test files and extract the text. Jasper, in the link [1] you sent, are the files from the second edition of the test suite or the first one? Are they up to date?
Probably not (it was years ago). But it should not be terribly difficult to regenerate them from the new tests (textclean.py can remove text and set links to be local).
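To make the text-removal step concrete, here is a minimal sketch of the kind of transformation textclean.py performs. This is NOT the actual script, just an illustration under the assumption that "removing text" means dropping the SVG <text> elements; the real tool also rewrites links to be local, which is omitted here.

```python
# Illustrative sketch only -- not the real textclean.py.
# Drops all <text> elements from an SVG document so that bugs in the
# text rendering code cannot affect non-text tests.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def strip_text(svg_source):
    """Return the SVG source with every <text> element removed."""
    ET.register_namespace("", SVG_NS)  # keep svg as the default namespace
    root = ET.fromstring(svg_source)
    for parent in root.iter():
        # Copy the child list, since we remove elements while iterating.
        for child in list(parent):
            if child.tag == "{%s}text" % SVG_NS:
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")
```

Running this over each test file (and saving alongside the original) would be one way to regenerate the text-free variants from the new test suite.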
btw, I do actually think that removing the text is good: if there were a problem with the text rendering code, it would affect a lot of tests. By removing the text from the tests we can better isolate what each file is actually testing.
That's the idea, but for tests that explicitly test text rendering it is of course another story (I think I simply left those out initially).
- Then, we would need to manually separate the tests into pass/fail.
Assuming the reference images are up to date: if you run the tests once, you should get a list of results in testresults.txt. A little scripting or sed magic should then allow you to put them in two different directories, or simply give you two lists of tests to run (untested, but I mean something along these lines: grep Pass testresults.txt | sed 's/\([^:]*\):.*/\1/' | xargs -I{} svn mv {} pass/{}). Alternatively, you could easily hack the test script to output test names to different files depending on the result (generating fail.txt and pass.txt, for example), in the section near the end where the actual testing occurs.
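The same split can be done in a few lines of Python instead of sed. A hedged sketch, assuming each result line in testresults.txt looks like "testname: Pass" or "testname: Fail" (I don't have the exact format at hand, so adjust the parsing accordingly):

```python
# Split test results into pass/fail lists.
# Assumes lines of the form "testname: Pass" / "testname: Fail";
# this is an assumption about testresults.txt, not a known format.
def split_results(lines):
    passed, failed = [], []
    for line in lines:
        if ":" not in line:
            continue  # skip blank lines and anything that is not a result
        name, _, result = line.partition(":")
        (passed if "Pass" in result else failed).append(name.strip())
    return passed, failed
```

Feeding it the open file and writing the two lists out as pass.txt and fail.txt would give exactly the two lists of tests to run.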
- Implement the code as explained in the message I sent before. Then add it to the tests in the current Inkscape trunk, to be executed every time someone makes a commit.
That would be great!
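Once the tests run on every commit, the check itself can be very simple: any test that is on the recorded pass list but does not pass in the current run is a regression. A sketch, again assuming the "testname: Pass" result format and a pass.txt list as described above (both are assumptions, not the actual Inkscape harness):

```python
# Hedged sketch of a per-commit regression check.
# Assumes result lines of the form "testname: Pass" and a recorded
# list of tests that are expected to pass; neither is a known format
# of the actual test harness.
def find_regressions(expected_passes, result_lines):
    """Return tests expected to pass that did not pass in this run."""
    current = {line.partition(":")[0].strip()
               for line in result_lines if "Pass" in line}
    return sorted(set(expected_passes) - current)
```

A commit-time script could run the suite, call this with the contents of pass.txt and testresults.txt, and exit non-zero if the returned list is non-empty.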
- Then, a way to make this information public would need to be set up. But we can discuss that while the first points are being worked on.
Sure. Whatever you do, I would not recommend the GUI I made (it turned out to be incredibly inefficient, and essentially more than you really need, especially if the tests run automatically, as you then have much finer-grained information anyway).