28 Aug 2009, 1:49 p.m.
On Fri, Aug 28, 2009 at 12:51 AM, Jasper van de Gronde <th.v.d.gronde@...528...> wrote:
> The usual way to handle this is for the test harness to have a way of marking those tests as 'expected to fail', and not flag them as failures (unless they start passing unexpectedly). Then developers can tell very quickly if there's a problem -- only real regressions will show as errors.
>
> I think a better solution would be to mark things that suddenly start to fail as regressions.
That's the same thing. The only difference is that you suggest generating the baseline automatically.
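The expected-to-fail marking described above can be sketched with Python's standard `unittest` module, whose `expectedFailure` decorator implements exactly this idea (the test and class names here are hypothetical, for illustration only):

```python
import unittest

class DemoTests(unittest.TestCase):
    def test_working_feature(self):
        # A normal test: a failure here is a real regression.
        self.assertEqual(1 + 1, 2)

    @unittest.expectedFailure
    def test_known_broken_feature(self):
        # Known bug: the harness records this as an "expected failure"
        # rather than flagging the run as broken. If it ever starts
        # passing, it is reported as an unexpected success, signalling
        # that the marker should be removed.
        self.assertEqual(1 + 1, 3)

# Run the suite and inspect the outcome programmatically.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(DemoTests).run(result)
print(result.wasSuccessful(), len(result.expectedFailures))
```

Because the broken test is marked, the run still counts as successful; only genuinely new failures would show up as errors.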
I wish I had time to help out, but I'm overcommitted just helping Wine out... - Dan