Dan Kegel wrote:
On Thu, Aug 27, 2009 at 10:03 AM, bulia byak<buliabyak@...400...> wrote:
On Thu, Aug 27, 2009 at 11:29 AM, Dan Kegel<dank@...2221...> wrote:
Why are there so many failures there? ...
Most of them are in SVG test suite, being the SVG features which we officially don't support. ...
The usual way to handle this is for the test harness to have a way of marking those tests as 'expected to fail', and not flag them as failures (unless they start passing unexpectedly). Then developers can tell very quickly if there's a problem -- only real regressions will show as errors.
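That "expected to fail" bookkeeping can be sketched in a few lines. This is a hypothetical illustration, not the actual harness: the test names and the `EXPECTED_FAILURES` set are made up, and real harnesses (e.g. pytest's `xfail`) carry more metadata.

```python
# Hypothetical sketch of expected-failure classification.
# Tests for officially unsupported features go in this set.
EXPECTED_FAILURES = {"svg-filters-01", "svg-animation-02"}

def classify(test_name, passed):
    """Separate real regressions from known gaps.

    A known-bad test that starts passing is flagged too, so the
    expected-failure list can be pruned.
    """
    if passed:
        return "UNEXPECTED PASS" if test_name in EXPECTED_FAILURES else "pass"
    return "expected fail" if test_name in EXPECTED_FAILURES else "REGRESSION"

results = {"svg-filters-01": False, "svg-paths-03": False, "svg-shapes-01": True}
for name, ok in sorted(results.items()):
    print(f"{name}: {classify(name, ok)}")
```

With this scheme, only `REGRESSION` (and optionally `UNEXPECTED PASS`) lines need a developer's attention.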
I think a better solution would be to mark things that suddenly start to fail as regressions, as that would paint a more accurate picture. If you want, you could relatively easily change the script in SVN to do so (if you do, notify me and I'll update the page).
For the moment the script simply highlights outcomes that changed since the last run, which makes it a little easier to spot regressions (and improvements/fixes!).
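Highlighting changed outcomes amounts to diffing two runs' results. A minimal sketch, assuming each run is stored as a test-name-to-outcome mapping (the function name and data shapes are illustrative, not the actual script):

```python
# Hypothetical sketch: compare two test runs and report changed outcomes.
def diff_outcomes(previous, current):
    """Return (test, old, new) for every test whose outcome changed."""
    changes = []
    for test, new in current.items():
        old = previous.get(test)
        if old is not None and old != new:
            changes.append((test, old, new))
    return changes

prev = {"t1": "pass", "t2": "fail", "t3": "pass"}
curr = {"t1": "pass", "t2": "pass", "t3": "fail"}
for test, old, new in diff_outcomes(prev, curr):
    label = "regression" if new == "fail" else "fix"
    print(f"{test}: {old} -> {new} ({label})")
```

Labeling a pass-to-fail transition as a regression, as suggested above, is then a one-line check on the new outcome.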