On Fri, Mar 21, 2008 at 10:13:32PM +0100, Jasper van de Gronde wrote:
bulia byak wrote:
On Fri, Mar 21, 2008 at 9:19 AM, Jasper van de Gronde <th.v.d.gronde@...528...> wrote:
Unfortunately I don't have a Linux system set up at the moment, so I can't actually execute 'make check'. So far I've come up with the following draft plan:
Great plan, and long overdue! :)
I think the important thing is to test both from bottom up (unit tests of classes and functions) and from top down (scripted tests and performance measurements of the entire program doing various tasks on various files).
It's a good idea to also make use of the command-line access to verbs; this would indeed allow for some very interesting testing possibilities. The main problem would be how to verify the results. I would imagine comparing the SVG result to a reference output would be most interesting (after all, just looking good usually isn't good enough when executing verbs). But to compare SVGs it would be necessary to parse them and write comparison routines for the different elements, which might be a bit much to do as part of this project.
As a first approximation, a simple diff | wc -l could be used to estimate the size of a change, and an md5sum of the SVG file can be used to test for identical output. There are probably also tools for extracting the elements of an XML file, and a simple listing of the SVG elements could be useful for comparisons (e.g., after adding a group, check that a new group object is present).
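The element-listing idea is easy to sketch with Python's standard XML parser. This is only an illustration, not part of any existing Inkscape tooling; the two inline SVG strings stand in for a document before and after running a hypothetical "group" verb:

```python
import hashlib
import xml.etree.ElementTree as ET
from collections import Counter

def file_md5(path):
    """Checksum a file on disk, to test for byte-identical output."""
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def element_counts(svg_text):
    """Count SVG elements by local tag name, ignoring namespaces."""
    root = ET.fromstring(svg_text)
    return Counter(el.tag.split('}')[-1] for el in root.iter())

before = '<svg xmlns="http://www.w3.org/2000/svg"><rect/></svg>'
after = '<svg xmlns="http://www.w3.org/2000/svg"><g><rect/></g></svg>'

# After a "group" verb runs, exactly one new <g> element should appear.
delta = element_counts(after) - element_counts(before)
print(delta['g'])  # → 1
```

Comparing element counts like this is robust against attribute reordering and whitespace changes that would make a raw diff noisy.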
Alternatively, the resulting bitmaps could be compared; this would be a lot less work (it would be needed for the SVG conformance tests anyway), and it would still catch bugs that cause visible problems.
The Cairo test suite has a simple bitmap diff tool that could be reused here. There are also other similar tools (I used one once from a video camera motion sensor project I found on SourceForge.)
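In the simplest case a bitmap diff just counts differing pixels. The sketch below is a crude stand-in for a real tool like the one in the Cairo test suite (which does perceptually smarter comparisons); images are represented as flat lists of (r, g, b) tuples purely for illustration:

```python
def bitmap_diff(img_a, img_b):
    """Count differing pixels between two equal-size images,
    each given as a flat list of (r, g, b) tuples."""
    if len(img_a) != len(img_b):
        raise ValueError("image sizes differ")
    return sum(1 for a, b in zip(img_a, img_b) if a != b)

# A 2x2 all-white reference vs. a render with one regressed pixel.
reference = [(255, 255, 255)] * 4
rendered = [(255, 255, 255)] * 3 + [(255, 0, 0)]
print(bitmap_diff(reference, rendered))  # → 1
```

A real harness would also want a small tolerance per channel, since antialiasing can legitimately differ slightly between renderer versions.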
Combining these methods I hope to minimize the amount of human intervention needed. If anyone has any other ideas I'd be glad to hear them.
I have a couple thousand miscellaneous SVG files from various sources, and before a release I ran a very simple script that loaded each file and exported it to PNG. Even without any bitmap diffing, this found at least a couple of bugs.
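Such a load-and-export smoke test is a few lines of Python around the Inkscape command line. This is a sketch, not the script mentioned above; it assumes the old-style --export-png flag from Inkscape's pre-1.0 CLI (newer releases use --export-filename instead):

```python
import subprocess
from pathlib import Path

def export_command(svg_path):
    """Build the Inkscape invocation that renders one SVG to PNG.
    --export-png is the pre-1.0 flag; adjust for newer versions."""
    png_path = Path(svg_path).with_suffix('.png')
    return ['inkscape', str(svg_path), '--export-png=' + str(png_path)]

def export_all(svg_dir):
    """Try to render every SVG in a directory; report the files
    that crash or fail to export."""
    failures = []
    for svg in sorted(Path(svg_dir).glob('*.svg')):
        result = subprocess.run(export_command(svg))
        if result.returncode != 0:
            failures.append(svg.name)
    return failures
```

Even without comparing the output images, a non-zero exit code (or a crash) on any file is already a caught bug.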
I get the feeling there is no need to worry about getting enough test images :)
Along these lines of using bitmap diff tools, you could create a suite of regression tests from past problematic SVGs. This could be done by scouring Launchpad for Fixed bugs with attachments that are not patches (you'd need to write a spider script, or use a tool like bugbuddy, since the advanced search doesn't support that query), downloading the SVGs, and prepending the bug ID to their filenames.
Collect those files, run them through the current Inkscape to produce PNGs, and store those as the reference images. Then make a simple test script that runs through the SVGs and compares the output to the reference images, to look for future regressions. Toss that into a new SVN repo, and we'd suddenly have a big regression test suite to run automatically.
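The comparison step of such a script could start out as a plain checksum match between the reference directory and the fresh output, flagging changed files for closer inspection with a bitmap diff tool. A minimal sketch (directory layout and function names are assumptions, not an existing script):

```python
import hashlib
from pathlib import Path

def checksum(path):
    """MD5 of a file's bytes."""
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

def find_regressions(reference_dir, output_dir):
    """Compare freshly exported PNGs against stored reference PNGs;
    return the names whose bytes changed or that are missing."""
    changed = []
    for ref in sorted(Path(reference_dir).glob('*.png')):
        out = Path(output_dir) / ref.name
        if not out.exists() or checksum(out) != checksum(ref):
            changed.append(ref.name)
    return changed
```

Byte-identity is a strict criterion; once legitimate rendering changes start producing false positives, the checksum test can be swapped for a thresholded pixel diff.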
Bryce