knownfail or skip for flakey tests?

Jelmer Vernooij jelmer at
Mon Dec 5 16:51:42 MST 2011

On 14/10/11 08:14, Stefan (metze) Metzmacher wrote:
> Hi,
>>>> after discussing with Metze, I marked the
>>>> samba4.drs.delete_object.python test knownfail.
>>>> It seems to be flakey:
>>> If the test is flaky, it should probably be skipped. Having a test in
>>> knownfail means that it will be reported as a failure if the test
>>> actually succeeds.
>> Jelmer,
>> I know this is meant to be the case, but I'm pretty sure it isn't how it
>> works at the moment.  We have been putting flaky tests in knownfail for
>> quite some time now, which allows us to tell the difference between
>> 'fails' and 'segfaults'.
> Sadly currently knownfail just means that we ignore failures,
> but we don't turn unexpected success into an error.
> We used to do that a few years ago, before we got
> selftest/format-subunit and
> selftest/filter-subunit as external processes.
> I think we should try to fix that and add additional
> handling for flakey tests. We could then maintain
> flakey-failures and flakey-errors files and also ignore errors
> if needed, instead of only failures.
This has now been fixed in master. Tests listed in selftest/knownfail 
that actually succeed will now once again trigger a uxsuccess 
(unexpected success), which is considered an error.

I've moved the tests in knownfail that had a comment marking them as 
flapping to selftest/flapping. Several tests could also be removed from 
knownfail because they are no longer failing (e.g. because remote calls 
have been implemented, or bugs have been fixed). If you run into any more 
tests that are flapping, please add them there as well.
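To illustrate the semantics described above, here is a minimal sketch of
how the three outcome classes could be distinguished. The function and
variable names (classify_result, KNOWNFAIL, FLAPPING) are illustrative
only and are not Samba's actual selftest code:

```python
# Hypothetical sketch: classify a test outcome given knownfail and
# flapping lists. Names are illustrative, not the real selftest API.

KNOWNFAIL = {"samba4.drs.delete_object.python"}  # expected to fail
FLAPPING = {"samba4.example.flaky.test"}         # outcome ignored either way

def classify_result(test, passed):
    if test in FLAPPING:
        # Flapping tests never affect the overall result.
        return "flapping"
    if test in KNOWNFAIL:
        # A knownfail test that passes is an unexpected success
        # (uxsuccess), which counts as an error; a failure is expected.
        return "uxsuccess" if passed else "xfail"
    return "success" if passed else "failure"
```

With this scheme, a passing knownfail test yields "uxsuccess" and so
breaks the build, while a flapping test is ignored whether it passes
or fails.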

More information about the samba-technical mailing list