Rendering(/verb) tests - comparing output files to reference files

To elaborate on what I'm working on, I currently have a tool which can execute Inkscape with an arbitrary command line and (given the name of the output file) compare the output to a set of reference files. If it finds a match it assigns the test the result belonging to that reference file; if it can't find a match the test is marked "New". (It can also deal with crashes and so forth. And it can use a tool like perceptualdiff to compare the files.)
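In Python the core of it would look roughly like this (a simplified sketch, not the actual tool; the ref-<result>-*.png naming convention and the helper names are just examples of how results could be encoded):

    import filecmp, glob, os, subprocess

    def images_match(a, b):
        """Byte-wise comparison first, then perceptualdiff as a fallback
        (perceptualdiff exits with 0 when the images look the same)."""
        if filecmp.cmp(a, b, shallow=False):
            return True
        return subprocess.run(['perceptualdiff', a, b],
                              stdout=subprocess.DEVNULL).returncode == 0

    def run_test(args, output_file, ref_dir):
        """Run Inkscape and classify the output against the reference files."""
        if subprocess.run(['inkscape'] + args).returncode != 0:
            return 'crash'
        for ref in glob.glob(os.path.join(ref_dir, 'ref-*')):
            if images_match(output_file, ref):
                # The expected result (pass/fail/...) is encoded in the name.
                return os.path.basename(ref).split('-')[1]
        return 'new'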
Currently there are two shortcomings:
1. Obviously it needs a bit more automation (you don't want to enter everything manually). Locally I'm doing this with a batch file; since that isn't portable I'm thinking of either modifying buildtool to be able to do this, or more or less porting my batch file to Python.
2. Comparing the generated files completely automatically to the W3C reference files is practically impossible because of font differences. The same goes for sharing reference files between different computers.
I would definitely welcome some ideas for solving 2. Currently I'm thinking of a combined strategy. First of all, the problem can be mitigated somewhat by using --export-id=test-body-content --export-id-only (this makes sure the revision string and the border aren't exported). This doesn't make it easier to compare the output to existing reference files, but it does make it easier to share reference files between computers. Going a bit further, it would be possible to eliminate all the unnecessary labels from the test files.
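As a concrete example (the file names here are made up), the kind of invocation I mean is roughly:

    import subprocess

    # Export only the element with id="test-body-content", so the revision
    # string and the border around the test are left out of the PNG.
    subprocess.run(['inkscape',
                    '--export-id=test-body-content', '--export-id-only',
                    '--export-png=coords-units-01-b.png',
                    'coords-units-01-b.svg'])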
Secondly, there will always be files with text. To prevent those from being a problem I could do roughly two things:
- Make sure it is relatively easy to judge the files on your own PC (for example, some GUI tool that lets you accept or reject each rendering with the click of a button; see the sketch below).
- Make some on-line tool that runs the tests and publishes the results.
Both have their (dis)advantages.
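For the first option, something as simple as this would already do (a very rough sketch using Tkinter and PIL, both of which I'm just assuming are available; the file locations are made up):

    import glob
    import tkinter as tk
    from PIL import Image, ImageTk

    def judge(files):
        """Show each rendering and record a pass/fail verdict per file."""
        verdicts = {}
        queue = list(files)
        root = tk.Tk()
        label = tk.Label(root)
        label.pack()

        def show_next():
            if not queue:
                root.destroy()
                return
            img = ImageTk.PhotoImage(Image.open(queue[0]))
            label.configure(image=img)
            label.image = img  # keep a reference so Tk doesn't discard it
            root.title(queue[0])

        def record(verdict):
            verdicts[queue.pop(0)] = verdict
            show_next()

        tk.Button(root, text='Pass', command=lambda: record('pass')).pack(side='left')
        tk.Button(root, text='Fail', command=lambda: record('fail')).pack(side='right')
        show_next()
        root.mainloop()
        return verdicts

    print(judge(sorted(glob.glob('output/*.png'))))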
What would people prefer? And what are the possibilities for option 2? (That is, would it be possible to do this on the Inkscape site somewhere? Does SF have facilities for running periodic tests that I could use?)

-----Original Message-----
From: inkscape-devel-bounces@lists.sourceforge.net [mailto:inkscape-devel-bounces@lists.sourceforge.net] On Behalf Of Jasper van de Gronde
Sent: Friday, 11 July 2008 13:33
To: inkscape
Subject: [Inkscape-devel] Rendering(/verb) tests - comparing output files to reference files
> To elaborate on what I'm working on, I currently have a tool which can execute Inkscape with an arbitrary command line and (given the name of the output file) compare the output to a set of reference files. If it finds a match it assigns the test the result belonging to that reference file; if it can't find a match the test is marked "New". (It can also deal with crashes and so forth. And it can use a tool like perceptualdiff to compare the files.)
> Currently there are two shortcomings:
> 1. Obviously it needs a bit more automation (you don't want to enter everything manually). Locally I'm doing this with a batch file; since that isn't portable I'm thinking of either modifying buildtool to be able to do this, or more or less porting my batch file to Python.
Choosing between buildtool and Python, I think it is best to make a Python script. You can then always execute that script from within buildtool.
> 2. Comparing the generated files completely automatically to the W3C reference files is practically impossible because of font differences. The same goes for sharing reference files between different computers.
> I would definitely welcome some ideas for solving 2. Currently I'm thinking of a combined strategy. First of all, the problem can be mitigated somewhat by using --export-id=test-body-content --export-id-only (this makes sure the revision string and the border aren't exported). This doesn't make it easier to compare the output to existing reference files, but it does make it easier to share reference files between computers. Going a bit further, it would be possible to eliminate all the unnecessary labels from the test files.
> Secondly, there will always be files with text. To prevent those from being a problem I could do roughly two things:
> - Make sure it is relatively easy to judge the files on your own PC (for example, some GUI tool that lets you accept or reject each rendering with the click of a button).
> - Make some on-line tool that runs the tests and publishes the results.
> Both have their (dis)advantages.
> What would people prefer? And what are the possibilities for option 2? (That is, would it be possible to do this on the Inkscape site somewhere? Does SF have facilities for running periodic tests that I could use?)
I like both options. It is good to have something that runs on a developer's own PC, so it's easy to check whether local changes broke anything (I don't know about fonts, though). And it is also good to have some table on the internet showing what goes right and what goes wrong (and when something goes wrong, show the PNGs so people can see exactly what went wrong). One step further would be to have other programs (Batik, Adobe, Firefox, ...) in that table as well. Even further: export to PDF and check whether the result is visually the same as the SVG (I already know of one bug with markers that shows up in PDF but not in PNG); I think many users are interested in which SVG features transfer well to PDF.
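Roughly something like this, perhaps (just a sketch; the file name is made up, and Ghostscript is only one of several ways to rasterize the PDF):

    import subprocess

    def pdf_matches_png(svg, dpi=90):
        """Export an SVG to PNG and to PDF, rasterize the PDF at the same
        resolution, and compare the two bitmaps with perceptualdiff."""
        subprocess.run(['inkscape', '--export-dpi=%d' % dpi,
                        '--export-png=direct.png', svg])
        subprocess.run(['inkscape', '--export-pdf=via.pdf', svg])
        subprocess.run(['gs', '-q', '-r%d' % dpi, '-sDEVICE=png16m',
                        '-o', 'via_pdf.png', 'via.pdf'])
        # perceptualdiff exits with 0 when the two images look the same
        return subprocess.run(['perceptualdiff',
                               'direct.png', 'via_pdf.png']).returncode == 0

    print(pdf_matches_png('markers-shorthand-01.svg'))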
Great work! Johan