
Hi all,
here is my draft of a testing strategy for Inkscape (sorry, it's a copy-paste of my document into the mail; I can forward you the RTF document if you want, which is easier to read). I'm currently listing the "specifications", in order to have a reference of the features to be tested. The main things to be defined, if you agree with this approach:
- establish a test process
- establish communication/coordination tools (wiki / specific web page / mailing list?)
- prepare/review specifications, test tools and test cases
- begin tests
Regards,
Matiphas
Preliminary note: This is a preliminary/draft document for defining a test strategy for the Inkscape project (http://www.inkscape.org). It is currently in Rich Text Format (RTF), for easy exchange.
-> I don't know yet how to manage this document (DocBook? OpenOffice?) (specific web page? wiki? CVS?)
-> In any case this document will be split up, for the following reasons:
- to separate the specifications of the software, the test strategy, test traceability, test items, test results...
- ease of maintenance
- separation of the features under test
At the moment it is mainly organized according to the categories of the SourceForge bug tracker for Inkscape (see http://sourceforge.net/tracker/?group_id=93438).
Reference: Inkscape testing framework: http://www.inkscape.org/cgi-bin/wiki.pl?TestingFramework (Missing) Inkscape "specifications" (not really a pure specification of the software, which would be incompatible with Inkscape's development process, but rather a hierarchical and detailed list of features to be tested)
Introduction: Why test? To ensure the quality of the software (does it suit user needs?) and, even better, the quality of the code/design.
What are the important concepts of testing:
- measure: metrics (specifications and comparison against those specifications) are needed. This is important for the homogeneity of the tests and the correct interpretation of their results.
- coverage: which areas (design / code / features / architectures) are covered by the tests. Strongly linked to the efficiency of the testing process.
- repeatability/reproducibility: the tests must provide coherent results, in order to be correctly interpreted. Automation of the tests and well-documented references (vocabulary, procedures...) help a lot.
- periodicity: when the tests should be executed. This depends on the criticality, the probability of bugs and the level of test.
What is required for efficient tests:
- KISS: test cases should be very simple, so that they are easy to understand, easy to implement, and their results easy to interpret.
- well maintained and documented
Acronyms and abbreviations: (not used yet, but might become useful)
NA: Not Applicable
TBD: To Be Done
Levels of tests:
Unit testing: at code level, providing tools/scripts to test classes/functions... Mainly depends on the coding team.
Feature testing: the biggest part, and the largest amount of work. After defining the features to be tested, establish precise test cases and execute them. Can be partially automated (see below in this document).
Non-regression testing: checking that nothing has been broken in the code after important/critical development. Can be covered by feature and unit testing, but a non-regression analysis after important/critical development is a great help. Also, after finding a new bug not yet covered by the test plan, prepare a new test item and ensure it properly covers the new bug.
Reference tools: Purpose: define a list of reference tools against which the software should be compared (especially for issues around import/export, rendering, printing, maybe UI). To be refined. At the moment: take a look at http://www.inkscape.org/cgi-bin/wiki.pl?OtherProjects
SVG software
Vector software
Tests:
Build
Linux (Debian / Gentoo / Slackware / Mandrake / Fedora / SuSE / others)
Windows (98 / 2000 / XP)
Mac OS (X)
BSD (Free / Open)
Other Unices (Solaris / Irix)
Notice: should be automated (for an example, see http://public.kitware.com/Dart/HTML/Index.shtml)
List of tasks:
- establish a list of builders and link it with the architectures they can manage
- establish a kind of "test contract" defining the tools to be used (compiler, dependencies, test report tool...) and the periodicity of the tests (a minimal builder-side sketch follows)
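As an illustration only, something like the following could be what one builder runs nightly; the source directory, the CVS update command and the report format are placeholders, to be fixed per builder/architecture in the "test contract":

#!/usr/bin/env python
# Minimal nightly-build sketch for a single builder (placeholder commands).
import subprocess, time, platform

STEPS = [
    ("checkout",  ["cvs", "-z3", "update", "-dP"]),   # or whatever the contract says
    ("configure", ["./configure"]),
    ("build",     ["make"]),
]

def run_nightly(srcdir):
    report = {"host": platform.platform(), "date": time.ctime()}
    for name, cmd in STEPS:
        t0 = time.time()
        proc = subprocess.run(cmd, cwd=srcdir, capture_output=True, text=True)
        report[name] = {"ok": proc.returncode == 0,
                        "seconds": round(time.time() - t0, 1)}
        if proc.returncode != 0:
            # keep the tail of the output for the failure report
            report[name]["tail"] = (proc.stdout + proc.stderr)[-2000:]
            break
    return report   # to be mailed/posted with the test-report tool agreed in the contract

if __name__ == "__main__":
    print(run_nightly("inkscape"))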
Import Bitmap Vector
Export Bitmap Vector
Printing
Node editing
Shape editing
Transformations
Color Management
Internationalization
Main tasks:
- list the translations that are actively maintained (see the rough sketch below)
- starting with http://www.inkscape.org/cgi-bin/wiki.pl?InkscapeTerminology, prepare a good dictionary covering the maintained languages (can be very helpful for reading the test procedures), plus the vocabulary of the application (messages, menus, etc.)
- establish a list of users with the languages they can check, and define a kind of "test contract"
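A very rough starting point for the "actively maintained" list could be to count untranslated entries in the po/ files; the po/ path, the counting heuristic and the 80% threshold below are my assumptions, not project policy:

#!/usr/bin/env python
# Rough sketch: estimate how complete each translation is.
import glob, os, re

def po_stats(path):
    text = open(path, encoding="utf-8", errors="replace").read()
    total = text.count('msgid "')                    # every entry (incl. the header)
    # an msgstr "" with no continuation line usually means "still untranslated"
    empty = len(re.findall(r'msgstr ""\n(?!")', text))
    return total, empty

for po in sorted(glob.glob("po/*.po")):
    total, empty = po_stats(po)
    done = 100.0 * (total - empty) / total if total else 0.0
    lang = os.path.basename(po)[:-3]
    print("%-8s %5.1f%% translated %s" % (lang, done, "" if done > 80 else "(needs help?)"))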
UI
SVG compliance
See 'specifications of software' plus the SVG specifications. The main idea is to provide a large repository of reference "SVG files" and "editing sequences", then two types of actions:
- open the reference files with Inkscape, export them as bitmaps and compare with reference bitmaps produced by a reference application (Batik?) (see the sketch below)
- open Inkscape, launch a reference editing sequence, export the result as a bitmap and compare with a reference bitmap produced by a reference application on the same sequence (Inkscape itself for non-regression, Batik?)
These sequences can be combined into one in order to save time, but the analysis of the results then becomes more complex.
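For the first type of action, a minimal sketch could look like this. It assumes Inkscape's command-line PNG export (--export-png) and uses the Python Imaging Library for the pixel diff; the output size, file names and zero tolerance are placeholders to be agreed on:

#!/usr/bin/env python
# Sketch of "open reference file, export as bitmap, compare with reference bitmap".
import subprocess, sys
from PIL import Image, ImageChops

TOLERANCE = 0   # maximum number of differing pixels accepted

def export_png(svg_path, png_path, size=256):
    subprocess.run(["inkscape", svg_path, "--export-png=" + png_path,
                    "-w", str(size), "-h", str(size)], check=True)

def compare(png_a, png_b):
    a = Image.open(png_a).convert("RGBA")
    b = Image.open(png_b).convert("RGBA")
    if a.size != b.size:
        return False, "size mismatch"
    diff = ImageChops.difference(a, b)
    differing = sum(1 for px in diff.getdata() if px != (0, 0, 0, 0))
    return differing <= TOLERANCE, "%d differing pixels" % differing

if __name__ == "__main__":
    svg, reference = sys.argv[1], sys.argv[2]   # reference PNG from e.g. Batik
    export_png(svg, "inkscape-out.png")
    ok, detail = compare("inkscape-out.png", reference)
    print(("PASS" if ok else "FAIL") + ": " + detail)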
Performance "Speed benchmarking" (can be automated) - provide a list of criterias to be tested (time to launch, time to open complex docs, time to save complex docs, speed of scrolling, speed of application of different types of transformations) - provide a benchmark tool - establish a list of users with the architecture on which they use the software, then define a kind of "test contract" defining the tools to be used and the periodicity of the tests
Verification of robustness
- testing for memory leaks (I am not a specialist in this: I need help)
- provide a "shaker" tool (this tool would launch Inkscape and then fire a predefined number of pseudo-random commands, logging each action). The goal is to detect crashes and analyse their conditions with the logs
- testing against severe conditions (corrupted files / out of memory / out of disk space ...) (see the sketch below)
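For the "corrupted files" part of the severe conditions, something like the following could be a first step; the corruption scheme, the 30s timeout and the crash criterion (negative return code = killed by a signal) are assumptions for illustration:

#!/usr/bin/env python
# Sketch: feed randomly corrupted copies of a valid SVG to Inkscape, log crashes/hangs.
import random, subprocess, os

def corrupt(data, n_flips=20):
    data = bytearray(data)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def shake(reference_svg, rounds=50, seed=1):
    random.seed(seed)                      # reproducibility: same seed, same corrupted files
    original = open(reference_svg, "rb").read()
    for i in range(rounds):
        path = "/tmp/corrupt-%03d.svg" % i
        open(path, "wb").write(corrupt(original))
        try:
            proc = subprocess.run(["inkscape", path, "--export-png=/tmp/out.png"],
                                  capture_output=True, timeout=30)
            status = ("crash (signal %d)" % -proc.returncode if proc.returncode < 0
                      else "exit %d" % proc.returncode)
        except subprocess.TimeoutExpired:
            status = "hang (killed after 30s)"
        print("%s: %s" % (path, status))
        if not status.startswith(("crash", "hang")):
            os.remove(path)                # keep only the interesting inputs for analysis

if __name__ == "__main__":
    shake("tests/reference.svg")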