On Fri, Mar 21, 2008 at 01:19:28PM +0100, Jasper van de Gronde wrote:
I'm interested in improving Inkscape's test framework for the GSoC. I've read the GSoC related documentation. I've also read all the information on testing on the Wiki that I could find and had a look at some of the current unit tests in SVN.
Unfortunately I don't have a Linux system set up at the moment, so I can't actually execute 'make check'. So far I've come up with the following draft plan:
1. Make sure the existing tests work on Win32 (and Mac OS X).
2. Set up a system to rerun tests periodically/when needed.
3. *Possibly* also try generating some historical data (where possible).
4. Implement new tests for (at least):
   - nr-compose (I have already done this locally, at least partially)
   - livarot (parts of it)
   - sp_svg_... (transform_read/write, read/write_path, etc.)
5. Integrate the (existing) SVG conformance tests so that they are also rerun. I will attempt to partially automate this (as far as I understand, the current tests are evaluated manually) by storing result images and only asking for human judgement when an image changes. I'm also thinking of a few more advanced ways of automating some of these tests.
6. Add performance information by timing tests (as well as the rendering of images, for the SVG conformance tests for example).
In short, my intention is to create a system which continuously keeps track of the status of Inkscape's unit tests as well as its SVG conformance and performance. Test results will be accessible online.
Is this more or less what was originally intended? In any case, I would appreciate any suggestions and/or information you might have.
Actually, the original intent is more about gaining a suite of useful tests than about setting up the test framework. We already have most of the infrastructure for running tests via cxxtest and make check; certainly it could be better/fancier, but that's far secondary to having more valuable tests in the first place. (Having worked on several test frameworks myself, I know how much more attractive they are to work on than "just" tests, but the ultimate goal is finding and fixing bugs, and for that we need tests.)
So, given that the summer moves along fast, I would encourage you to focus on #1 and #4, which are the areas where you could add the most unique value. The others are of course worthwhile, but other folks have set those up in the past and I expect could again in the future; so far the limiting factor has been our scarcity of tests, which reduces the usefulness of frameworks to begin with. So if you focused particularly on #1 and #4, I think it might stimulate the rest to fall into place, and would make the most valuable use of your time.
livarot probably isn't worth instrumenting with tests, as it's scheduled for removal anyway.
With testing, there are some general rules of what things are best to make tests for:
* Code that is under active development according to svn logs (bugs breed in new code)
* Code that integrates two different chunks of code (bugs hide out between the cracks)
* Code in which a lot of other bugs have already been found (bugs tend to congregate together). Check recent bug tracker activity for where bugs are being found.
* Code that isn't documented (bugs live in dark areas)
* Code that is executed a lot (these bugs hurt the most)
* Code that is rarely, if ever, executed (these bugs are fat and lazy, and easy to find)
* Code that when you ask a developer about it, they wince and wish to change the subject. ;-)
Hopefully those heuristics can be used to prioritize which sections of code most need tests written.
The types of tests needed, as Bulia and others mentioned, include both unit tests (via cxxtest) and high-level functional tests (such as via the command-line verbs). The former are well suited to running via make check; the latter could be hooked into make check too, but would probably be more useful as a stand-alone suite of test scripts.
Bryce