Hi folks,
I see a continuous build and test system has been discussed before, e.g.
http://code.google.com/soc/2008/inkscape/appinfo.html?csaid=D8D99A1DF8E5D1F7
http://wiki.inkscape.org/wiki/index.php/TestSuite-blueprint
http://kaeso.wordpress.com/2009/05/05/inkscape-047-tentative-release-plan-an...
but I don't see any results page online. Has there been any movement on setting one up? As you know, it's a great way to avoid surprises: you can catch problems early and just revert the commit that caused them.
Thanks, Dan
On Thu, Aug 27, 2009 at 11:17 AM, Dan Kegel<dank@...2221...> wrote:
Hi folks,
I see a continuous build and test system has been discussed before, e.g. http://code.google.com/soc/2008/inkscape/appinfo.html?csaid=D8D99A1DF8E5D1F7
Yes, this has been implemented, and has proven very useful, see gsoc-testsuite module in our SVN and the results at http://home.hccnet.nl/th.v.d.gronde/inkscape/ResultViewer.html. Unfortunately this is largely limited to testing rendering, which is important but only a part of Inkscape's functionality. More things can be automatically tested via --verb command line scripts, but this hasn't been done yet as far as I know (that is, --verbs work, but no one has written the actual tests and a harness to run them).
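A harness for those --verb tests wouldn't need to be large. Something along these lines could be a starting point (a rough, untested Python sketch; the verb names, file names and comparison are only illustrative):

    #!/usr/bin/env python
    # Rough sketch of a --verb test harness; verb and file names are examples only.
    import filecmp, shutil, subprocess, sys

    TESTS = [
        # (input SVG, verbs to apply, file holding the expected result)
        ("tests/ellipse.svg",
         ["EditSelectAll", "ObjectToPath", "FileSave", "FileQuit"],
         "tests/ellipse-as-path.expected.svg"),
    ]

    failures = 0
    for src, verbs, expected in TESTS:
        work = src + ".tmp.svg"
        shutil.copy(src, work)                # --verb edits the file in place
        subprocess.call(["inkscape"] + ["--verb=" + v for v in verbs] + [work])
        if filecmp.cmp(work, expected, shallow=False):
            print "ok:   " + src
        else:
            print "FAIL: " + src              # a real harness would show a diff
            failures += 1
    sys.exit(1 if failures else 0)

Byte-for-byte comparison is of course too strict in practice; comparing rendered PNGs (as the rendering tests do) or a normalized XML diff would be more robust.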
-----Original Message-----
From: bulia byak [mailto:buliabyak@...400...]
Sent: Thursday, August 27, 2009 16:38
To: Dan Kegel
Cc: inkscape-devel@lists.sourceforge.net
Subject: Re: [Inkscape-devel] Build/test bot for inkscape?
Yes, this has been implemented, and has proven very useful, see gsoc-testsuite module in our SVN and the results at http://home.hccnet.nl/th.v.d.gronde/inkscape/ResultViewer.html. Unfortunately this is largely limited to testing rendering, which is important but only a part of Inkscape's functionality. More things can be automatically tested via --verb command line scripts, but this hasn't been done yet as far as I know (that is, --verbs work, but no one has written the actual tests and a harness to run them).
I think we should first try to get the tests running regularly and automatically. Now we have to rely on the awesomeness of Jasper, which is great but probably not endless ;)
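Even a nightly cron job driving a small script would be a start, e.g. something like this untested sketch (the paths and make targets are placeholders, assuming a checkout that builds with plain make):

    #!/usr/bin/env python
    # Sketch of a nightly build-and-test driver, meant to be run from cron.
    # The working directory and targets are hypothetical placeholders.
    import os, subprocess, sys, time

    WORKDIR = os.path.expanduser("~/inkscape-nightly/trunk")

    def run(cmd):
        print time.strftime("%H:%M:%S"), " ".join(cmd)
        return subprocess.call(cmd, cwd=WORKDIR)

    if run(["svn", "update"]) != 0:
        sys.exit("svn update failed")
    if run(["make"]) != 0:
        sys.exit("build failed")           # a real bot would mail the list here
    sys.exit(run(["make", "check"]))       # or invoke the test suite runner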
Johan
On Thu, Aug 27, 2009 at 7:37 AM, bulia byak<buliabyak@...400...> wrote:
Yes, this has been implemented, and has proven very useful, see gsoc-testsuite module in our SVN and the results at http://home.hccnet.nl/th.v.d.gronde/inkscape/ResultViewer.html.
Thanks, I've added it to my little list at http://wiki.winehq.org/UnitTestSuites
Unfortunately this is largely limited to testing rendering, which is important but only a part of Inkscape's functionality. More things can be automatically tested via --verb command line scripts, but this hasn't been done yet as far as I know (that is, --verbs work, but no one has written the actual tests and a harness to run them).
Hopefully someone will get around to that soon, it's worth it. - Dan
On Thu, Aug 27, 2009 at 7:37 AM, bulia byak<buliabyak@...400...> wrote:
Yes, this has been implemented, and has proven very useful, see gsoc-testsuite module in our SVN and the results at http://home.hccnet.nl/th.v.d.gronde/inkscape/ResultViewer.html.
Why are there so many failures there? It looks like nobody is paying attention to the results. Shouldn't those regressions stop development until they're fixed?
On Thu, Aug 27, 2009 at 11:29 AM, Dan Kegel<dank@...2221...> wrote:
Why are there so many failures there? It looks like nobody is paying attention to the results. Shouldn't those regressions stop development until they're fixed?
Most of them are in the SVG test suite, covering SVG features which we officially don't support. While this is regrettable, and is being slowly improved, it's not something that would affect regular users of Inkscape: since Inkscape has no UI for creating these SVG features, its inability to render them matters little.
On Thu, Aug 27, 2009 at 10:03 AM, bulia byak<buliabyak@...400...> wrote:
Most of them are in the SVG test suite, covering SVG features which we officially don't support. While this is regrettable, and is being slowly improved, it's not something that would affect regular users of Inkscape: since Inkscape has no UI for creating these SVG features, its inability to render them matters little.
The usual way to handle this is for the test harness to have a way of marking those tests as 'expected to fail', and not flag them as failures (unless they start passing unexpectedly). Then developers can tell very quickly if there's a problem -- only real regressions will show as errors. - Dan
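P.S. In case it helps, the baseline check itself can be tiny. A rough Python sketch (the file names and format are made up for illustration):

    # expected-failures.txt: one test name per line, the known-bad baseline.
    # results.txt: lines of the form "<testname> <PASS|FAIL>".
    known_bad = set(open("expected-failures.txt").read().split())

    regressions, surprises = [], []
    for line in open("results.txt"):
        parts = line.split()
        if len(parts) != 2:
            continue
        name, outcome = parts
        if outcome == "FAIL" and name not in known_bad:
            regressions.append(name)       # a real, new failure
        elif outcome == "PASS" and name in known_bad:
            surprises.append(name)         # passes unexpectedly: prune the baseline
    print "%d regressions, %d unexpected passes" % (len(regressions), len(surprises))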
Dan Kegel wrote:
The usual way to handle this is for the test harness to have a way of marking those tests as 'expected to fail', and not flag them as failures (unless they start passing unexpectedly). Then developers can tell very quickly if there's a problem -- only real regressions will show as errors.
I think a better solution would be to mark things that suddenly start to fail as regressions, as this would paint a more accurate picture. If you want, you could relatively easily change the script in SVN to do so (if you do, notify me and I'll update the page).
For the moment the script simply highlights outcomes that changed since the last run, which makes it a little easier to spot regressions (and improvements/fixes!).
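Roughly, that comparison boils down to something like this (sketch only; the result file names and format are made up):

    # Compare two runs and flag tests whose outcome changed.
    def load(path):
        # each line: "<testname> <PASS|FAIL>"
        return dict(line.split() for line in open(path) if line.strip())

    old = load("results-previous.txt")
    new = load("results-latest.txt")
    for name in sorted(new):
        before, after = old.get(name), new[name]
        if before == "PASS" and after == "FAIL":
            print "REGRESSION: " + name
        elif before == "FAIL" and after == "PASS":
            print "fixed:      " + name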
On Fri, Aug 28, 2009 at 12:51 AM, Jasper van de Gronde<th.v.d.gronde@...528...> wrote:
I think a better solution would be to mark things that suddenly start to fail as regressions
That's the same thing. The only difference is that you suggest generating the baseline automatically.
I wish I had time to help out, but I'm overcommitted just helping wine out... - Dan