On 10 October 2013 19:27, Jasper van de Gronde <th.v.d.gronde@...528...> wrote:
It's not completely clear to me what you need exactly (just a change/no change answer, a number of failed/succeeded tests, ...?), but there is currently no separation between good and bad tests. In particular, a test could have (and some do have) both known good and bad references related to it.
Basically we need scripts that return a zero exit code if all rendering tests succeed and a non-zero exit code if anything goes wrong. In this context, a "pass" means that every included rendering test worked, and a "fail" means that at least one of them didn't. In pseudo-code:
# Approximate behaviour of runtests.py...
def run_all_tests(test_list):  # names here are just placeholders
    num_errors = 0  # number of failed rendering tests
    for i in test_list:
        test_succeeded = run_rendering_test(i)
        if not test_succeeded:
            num_errors += 1
    return num_errors
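And to turn that count into the 0 / non-zero exit code I mentioned, the driver script would only need something like the following (a rough sketch; run_all_tests and test_list are just the placeholder names from the pseudo-code above, not the real names in runtests.py):

import sys

if __name__ == "__main__":
    # Exit status 0 only if every rendering test passed; otherwise the
    # number of failures becomes the (non-zero) exit code.
    sys.exit(run_all_tests(test_list))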
Again, I'm working in the dark a bit, because I haven't tried running the tests yet, so I might be talking bollocks!