
Hi all,
here is my work on the testing strategy of Inkscape. (Sorry, it's a copy-paste of my document into the mail; I can forward you the RTF document if you want, which is easier to read.) I'm currently listing the "specifications" in order to have a reference of features to be tested. The main things to be defined, if you agree with this approach:
- establish a test process
- establish communication/coordination tools (wiki / specific web page / mailing list?)
- prepare/review specifications, test tools and test cases
- begin tests
Regards,
Matiphas
Preliminary note: This is a preliminary/draft document for defining a test strategy for the Inkscape project (http://www.inkscape.org). It is currently in Rich Text Format (RTF), for easy exchange.
-> Don't know yet how to manage this document (DocBook? OpenOffice?) (specific web page? wiki? CVS?)
-> In any case this document will be split, for the following reasons:
- to separate specifications of the software, test strategy, test traceability, test items, test results...
- ease of maintenance
- separation of features under test
At the moment it is mainly structured according to the categories of the SourceForge bug tracker for Inkscape (see http://sourceforge.net/tracker/?group_id=93438)
Reference: Inkscape testing framework: http://www.inkscape.org/cgi-bin/wiki.pl?TestingFramework
(Missing) Inkscape "specifications" (not really a pure specification of the software, which would be incompatible with Inkscape's development process, but rather a hierarchical and detailed list of features to be tested)
Introduction: Why test? To ensure the quality of the software (does it suit user needs?) and, even better, the quality of the code/design.
What are the important concepts of testing:
- measure: metrics (specifications and comparison against those specifications) are needed. This is important for the homogeneity of the tests and the correct interpretation of their results.
- coverage: which areas (design / code / features / architectures) are covered by the tests. Strongly linked to the efficiency of the testing process.
- repeatability/reproducibility: the tests must provide coherent results in order to be correctly interpreted. Automation of the tests and well-documented references (vocabulary, procedures...) help a lot.
- periodicity: when the tests shall be executed. This depends on the criticality, the probability of bugs and the level of test.
What is required for efficient tests:
- KISS: test cases should be very simple, so that they are easy to understand, easy to implement, and their results easy to interpret.
- well maintained and documented
Acronyms and abbreviations (not used yet, but might become useful):
NA : Not Applicable
TBD : To Be Done
Levels of tests:
Unit testing: at code level, providing tools/scripts to test classes/functions... Mainly depends on the coding team.
Feature testing: the biggest part, and the largest amount of work. After defining the features to be tested, establish precise test cases and execute them. Can be partially automated (see below in this document).
Non-regression testing: the ability to check that nothing has been broken in the code after important/critical development. Can be covered by feature and unit testing, but a non-regression analysis after important/critical development is a great help. In addition, after finding new bugs, prepare new test items and ensure they properly cover those bugs.
Reference tools: Purpose: define a list of reference tools against which the software should be compared (especially for issues around import/export, rendering, printing, maybe UI). To be refined. At the moment, take a look at http://www.inkscape.org/cgi-bin/wiki.pl?OtherProjects
SVG software
Vector software
Tests:
Build
Linux (Debian / Gentoo / Slackware / Mandrake / Fedora / Suse / Others)
Windows (98 / 2000 / XP)
MacOS (X)
BSD (Free / Open)
Other Unices (Solaris / Irix)
Notice: should be automated (for an example, see http://public.kitware.com/Dart/HTML/Index.shtml)
List of tasks:
- establish a list of builders and link it with the architectures they can manage
- establish a kind of "test contract" defining the tools to be used (compiler, dependencies, test report tool...) and the periodicity of the tests
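As a very rough illustration of what an automated build check could report (the autotools-style commands and the availability of a recent Python on the build machines are assumptions, not a description of the existing build system):

    import subprocess, time, socket, sys

    # Build steps assumed for an autotools-style tree; adjust to the real build system.
    STEPS = [
        ["./configure"],
        ["make"],
        ["make", "check"],
    ]

    def run_build(source_dir):
        """Run each build step, returning a list of (command, ok, seconds)."""
        results = []
        for cmd in STEPS:
            start = time.time()
            proc = subprocess.run(cmd, cwd=source_dir, capture_output=True, text=True)
            results.append((" ".join(cmd), proc.returncode == 0, time.time() - start))
            if proc.returncode != 0:
                break   # no point continuing after a failed step
        return results

    if __name__ == "__main__":
        for cmd, ok, secs in run_build(sys.argv[1]):
            print("%s  %-20s %s  %5.1fs" %
                  (socket.gethostname(), cmd, "OK" if ok else "FAILED", secs))

Each builder could mail or upload that one-line-per-step report, which would already give a simple Dart-like view of which platforms currently build.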
Import Bitmap Vector
Export Bitmap Vector
Printing
Node editing
Shape editing
Transformations
Color Management
Internationalization
Main tasks:
- list the internationalizations (translations) that are actively maintained
- starting with http://www.inkscape.org/cgi-bin/wiki.pl?InkscapeTerminology, prepare a good dictionary covering the maintained languages (can be very helpful for reading the test procedures), plus the vocabulary of the application (messages, menus, etc.)
- establish a list of users with the languages they can check and define a kind of "test contract"
UI
SVG compliance
See 'specifications of software' plus the SVG specifications. The main idea is to provide a large repository of reference "SVG files" and "editing sequences", then two types of actions:
- open a reference file with Inkscape, export it as a bitmap and compare with a reference bitmap produced by a reference application (Batik?)
- open Inkscape, run a reference editing sequence, export the result as a bitmap and compare with a reference bitmap produced by a reference application on the same sequence (Inkscape itself for non-regression, Batik?)
The previous sequences can be combined into one in order to save time, but then analysis of the results becomes more complex. A rough sketch of the first action is given below.
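Here is a rough sketch of how the first action could be automated (the inkscape --export-png option and the Batik rasterizer invocation are assumptions to be checked against the installed versions; the bitmap comparison itself would be a separate tool):

    import os, subprocess, sys

    def render_both(svg_file, out_dir="."):
        """Rasterize one reference SVG with Inkscape and with the Batik rasterizer."""
        base = os.path.splitext(os.path.basename(svg_file))[0]
        ink_png = os.path.join(out_dir, base + ".inkscape.png")
        bat_png = os.path.join(out_dir, base + ".batik.png")
        # command-line PNG export; the option name may differ between versions
        subprocess.run(["inkscape", "--export-png=" + ink_png, svg_file], check=True)
        # Batik's rasterizer is usually invoked as a jar; -d gives the destination
        subprocess.run(["java", "-jar", "batik-rasterizer.jar", "-d", bat_png, svg_file],
                       check=True)
        return ink_png, bat_png

    if __name__ == "__main__":
        for svg in sys.argv[1:]:
            ink, bat = render_both(svg)
            print("rendered", svg, "->", ink, "and", bat)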
Performance "Speed benchmarking" (can be automated) - provide a list of criterias to be tested (time to launch, time to open complex docs, time to save complex docs, speed of scrolling, speed of application of different types of transformations) - provide a benchmark tool - establish a list of users with the architecture on which they use the software, then define a kind of "test contract" defining the tools to be used and the periodicity of the tests
Verification of robustness
- testing memory leaks (not a specialist of this: I need help)
- provide a shaker tool (this tool would launch Inkscape and then run a predefined number of pseudo-random commands, logging each action). The goal is to detect crashes and analyse their conditions from the logs.
- testing against severe conditions (corrupted files / out of memory / out of disk space ...)

On Thu, 15 Jul 2004, Gazal, Geraud (MED) wrote:
> Hi all,
> here is my work on the testing strategy of Inkscape. (Sorry, it's a copy-paste of my document into the mail; I can forward you the RTF document if you want, which is easier to read.)
Sorry it's taken a while to get to this; as you likely guessed, we've been tied up thinking about the release lately. I got a bit behind in my email... ;-)
> I'm currently listing the "specifications" in order to have a reference of features to be tested. The main things to be defined, if you agree with this approach:
> - establish a test process
This looks like an excellent start.
> Preliminary note: This is a preliminary/draft document for defining a test strategy for the Inkscape project (http://www.inkscape.org). It is currently in Rich Text Format (RTF), for easy exchange. -> Don't know yet how to manage this document (DocBook? OpenOffice?) (specific web page? wiki? CVS?)
For file format, let's try to keep it to plain text. Then it can be pasted and commented on in email without trouble.
Either Wiki or CVS would be suitable places for storing it. There are roughly equal pros and cons both ways. Wiki would be more widely accessible to more people, but CVS would allow a little more flexibility for manipulating the document (e.g., grep, txt2pdf, etc.)
> -> In any case this document will be split, for the following reasons: to separate specifications of the software, test strategy, test traceability, test items, test results...; ease of maintenance; separation of features under test. At the moment it is mainly structured according to the categories of the SourceForge bug tracker for Inkscape (see http://sourceforge.net/tracker/?group_id=93438)
It may be worthwhile, if we have better ideas for the categorization, to alter the bug tracker to match. Those categories were decided when we first started the project, and probably need to be updated to match what people actually report bugs about. If anyone has ideas for improvements, let me know and I'll apply them.
> Levels of tests: Unit testing: at code level, providing tools/scripts to test classes/functions... Mainly depends on the coding team.
> Feature testing: the biggest part, and the largest amount of work. After defining the features to be tested, establish precise test cases and execute them. Can be partially automated (see below in this document).
A subcategory of this is 'Compliance testing', which validates the features against a spec - in our case the W3C SVG spec. There may be other specs we can test compliance against, although some (like the GNOME Human Interface Guidelines) will generally not be automatable.
> Non-regression testing: the ability to check that nothing has been broken in the code after important/critical development. Can be covered by feature and unit testing, but a non-regression analysis after important/critical development is a great help. In addition, after finding new bugs, prepare new test items and ensure they properly cover those bugs.
There is also performance testing, which can be done in a variety of ways. For example, startup timing, function-level profiling, etc.
Also, we should include memory error testing ala purify and valgrind.
> Tests: Build Linux (Debian / Gentoo / Slackware / Mandrake / Fedora / Suse / Others) Windows (98 / 2000 / XP) MacOS (X) BSD (Free / Open) Other Unices (Solaris / Irix)
Note that an important condition for distros is what installation options were used. For example, a pure-base SuSE installation will not include developer tools like gcc. Most distros also have installation options which will end up lacking the -devel libs that Inkscape needs to compile. So each of these will require specification as to which installation options must be used during setup.
> Notice: should be automated (for an example, see http://public.kitware.com/Dart/HTML/Index.shtml)
Interesting - have you used this system?
> Internationalization Main tasks: - list the internationalizations (translations) that are actively maintained - starting with http://www.inkscape.org/cgi-bin/wiki.pl?InkscapeTerminology, prepare a good dictionary covering the maintained languages (can be very helpful for reading the test procedures), plus the vocabulary of the application (messages, menus, etc.)
> - establish a list of users with the languages they can check and define
> a kind of "test contract"
We may want to leave l10n separate, since the translation team seems to have things well organized currently.
> UI
> SVG compliance: See 'specifications of software' plus the SVG specifications. The main idea is to provide a large repository of reference "SVG files" and "editing sequences", then two types of actions: - open a reference file with Inkscape, export it as a bitmap and compare with a reference bitmap produced by a reference application (Batik?)
Yes, Batik
> - open Inkscape, run a reference editing sequence, export the result as
> a bitmap and compare with a reference bitmap produced by a reference application on the same sequence (Inkscape itself for non-regression, Batik?). The previous sequences can be combined into one in order to save time, but then analysis of the results becomes more complex.
Note, I developed a way using motion tracker software to calculate a "percent similarity" rating for an Inkscape-generated PNG against a Batik-generated PNG, which makes the analysis work easier. There's a glitch that'll need to be worked out: the original SVGs contain the description in the image, which throws off the comparison.
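For comparison, a much cruder pixel-level similarity measure can be computed directly; the sketch below uses the Python Imaging Library (an assumption on the tester's machine, and not the motion-tracker approach described above):

    from PIL import Image, ImageChops

    def percent_similarity(png_a, png_b):
        """Crude 'percent similarity': the share of pixels whose RGB values match
        exactly.  Assumes both PNGs have the same dimensions; a real tool would
        want a per-channel tolerance and would probably ignore the background."""
        a = Image.open(png_a).convert("RGB")
        b = Image.open(png_b).convert("RGB")
        if a.size != b.size:
            raise ValueError("images differ in size: %s vs %s" % (a.size, b.size))
        diff = ImageChops.difference(a, b)
        # a pixel counts as different if any of its channels differ
        differing = sum(1 for px in diff.getdata() if px != (0, 0, 0))
        total = a.size[0] * a.size[1]
        return 100.0 * (total - differing) / total

    if __name__ == "__main__":
        import sys
        print("%.2f%% similar" % percent_similarity(sys.argv[1], sys.argv[2]))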
Performance "Speed benchmarking" (can be automated) - provide a list of criterias to be tested (time to launch, time to open complex docs, time to save complex docs, speed of scrolling, speed of application of different types of transformations)
After we have implemented extensions and language bindings, we'll be able to expand this category to include a huge amount of internal functionality. We should be prepared, with this test plan, for situations where someone reports "It becomes aggravatingly slow when I do this, this, and this", so that we can script up those same actions and turn them into a performance test.
> - provide a benchmark tool - establish a list of users with the architectures on which they use the
> software, then define a kind of "test contract" defining the tools to be used and the periodicity of the tests
Also note that we may be able to get some mileage out of cross-compiler systems.
> Verification of robustness - testing memory leaks (not a specialist of this: I need help)
This is an area where our work as testers is really cut out for us, because there are a ton of memory issues in the codebase, and we need to be able to categorize them in a way that makes them accessible to the developers. Right now there's such an overwhelming number that developers wouldn't know where to start, so if in testing we can triage and prioritize the "top 10" or whatever, it would make the fixup work more tractable.
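As a starting point for collecting that data, something like the following could gather per-run leak summaries (a rough sketch only: the valgrind flag names and the headless export option are assumptions to be checked against the installed versions):

    import subprocess, sys

    def leak_report(inkscape_bin, svg_file, log_file="valgrind.log"):
        """Run one headless Inkscape export under valgrind and pull out the
        'definitely lost' lines so they can be counted and triaged."""
        cmd = ["valgrind", "--leak-check=full", "--log-file=" + log_file,
               inkscape_bin, "--export-png=/tmp/leaktest.png", svg_file]
        subprocess.run(cmd)
        with open(log_file) as f:
            return [line.rstrip() for line in f if "definitely lost" in line]

    if __name__ == "__main__":
        for line in leak_report(sys.argv[1], sys.argv[2]):
            print(line)

Sorting those summaries by leak size across a set of documents would already give a first "top 10" list.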
> - provide a shaker tool (this tool would launch Inkscape and then run
> a predefined number of pseudo-random commands, logging each action). The goal is to detect crashes and analyse their conditions from the logs
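Something like this could be a first cut at a shaker (a rough sketch only: it assumes an X11 session and the xdotool utility for injecting keystrokes, and the keystroke list is arbitrary; any other input-injection mechanism would do):

    import random, subprocess, sys, time

    # Arbitrary sample of keystrokes / accelerators; a real shaker would want a
    # much richer action table.
    KEYS = ["r", "e", "z", "Delete", "ctrl+z", "ctrl+d", "ctrl+a", "Escape"]

    def shake(iterations=200, seed=1):
        random.seed(seed)                       # fixed seed -> reproducible sequence
        proc = subprocess.Popen(["inkscape"])   # launch the application under test
        time.sleep(10)                          # crude wait for the window to appear
        log = open("shaker.log", "w")
        for i in range(iterations):
            key = random.choice(KEYS)
            log.write("%d %s\n" % (i, key))
            log.flush()                         # keep the log intact even if we crash
            subprocess.run(["xdotool", "key", key])   # send keystroke to the active window
            if proc.poll() is not None:         # Inkscape exited or crashed
                print("crashed after step %d, exit code %s" % (i, proc.returncode))
                return
            time.sleep(0.2)
        proc.terminate()
        print("survived %d random commands" % iterations)

    if __name__ == "__main__":
        shake(int(sys.argv[1]) if len(sys.argv) > 1 else 200)

The fixed random seed matters: the same seed replays the same sequence, so a crash found by the shaker can be reproduced and narrowed down.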
Another idea along these lines is to fire a variety of signals at it, and detect if it behaves correctly.
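For instance (a sketch, assuming a POSIX system; which signals Inkscape is actually expected to handle gracefully would still need to be decided):

    import os, signal, subprocess, time

    # SIGINT/SIGTERM should trigger an orderly shutdown; the others should at
    # least not crash the application.
    SIGNALS = [signal.SIGHUP, signal.SIGUSR1, signal.SIGUSR2, signal.SIGINT, signal.SIGTERM]

    def fire_signals():
        for sig in SIGNALS:
            proc = subprocess.Popen(["inkscape"])
            time.sleep(10)                       # crude wait for startup
            os.kill(proc.pid, sig)
            time.sleep(2)
            code = proc.poll()
            if code is None:                     # still running: fine for ignorable signals
                proc.terminate()
                proc.wait()
                print("%s: handled/ignored, still running" % sig)
            else:
                print("%s: exited with code %s" % (sig, code))

    if __name__ == "__main__":
        fire_signals()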
> - testing against severe conditions (corrupted files / out of memory /
> out of disk space ...)
Corrupted or invalid file testing is very important; we often receive bug reports for issues related to trying to open an unsupported file. If we can make the app more robust against these things, it'll help minimize the bug reports for this.
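A simple way to exercise that, sketched below: take a valid SVG, flip a few random bytes, and make sure Inkscape reports an error rather than crashing (the headless --export-png invocation is an assumption; a scripted GUI open could be used instead):

    import random, subprocess, sys

    def corrupt(data, flips=20, seed=0):
        """Return a copy of the file contents with a few random bytes changed."""
        random.seed(seed)
        buf = bytearray(data)
        for _ in range(flips):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    def open_corrupted_variants(svg_file, variants=50):
        """Feed corrupted copies of a valid SVG to Inkscape and report crashes.
        A non-zero exit with an error message is acceptable behaviour; being
        killed by a signal is not."""
        original = open(svg_file, "rb").read()
        for i in range(variants):
            bad = "/tmp/corrupt-%d.svg" % i
            open(bad, "wb").write(corrupt(original, seed=i))
            proc = subprocess.run(["inkscape", "--export-png=/tmp/corrupt.png", bad])
            if proc.returncode < 0:              # negative return code = killed by a signal
                print("crash (signal %d) on variant %d" % (-proc.returncode, i))

    if __name__ == "__main__":
        open_corrupted_variants(sys.argv[1])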
I also suspect that there may be significant performance issues in low-memory situations, that possibly could be valuable to test.
Again, this plan looks very good; I'm looking forward to this. I have a feeling that there is a *huge* amount of improvement we can gain by taking a systematic approach like this. I think it'll also be very intriguing to figure out how to effectively meld systematized testing with open source practices, and to see how much can be accomplished in doing so.
Bryce