Handling flaky tests

Handling flaky tests

Ivan ✪
Hi all,

I have a bunch of Selenium tests that are prone to failures because some of them are just flaky.
To avoid these failures, I implemented a retry analyzer (IRetryAnalyzer) that retries a failing test up to 3 times.
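
For reference, something along these lines (a simplified sketch, not my exact code):

    import org.testng.IRetryAnalyzer;
    import org.testng.ITestResult;

    // Simplified sketch: re-run a failing test until MAX_RETRIES is reached.
    public class RetryAnalyzer implements IRetryAnalyzer {
        private static final int MAX_RETRIES = 3;
        private int attempt = 0;

        public boolean retry(ITestResult result) {
            if (attempt < MAX_RETRIES) {
                attempt++;
                return true;   // ask TestNG to run the method again
            }
            return false;      // give up and let the failure stand
        }
    }

wired up on the tests with @Test(retryAnalyzer = RetryAnalyzer.class).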

Now I'd like these tests (the ones that eventually pass on a retry) to be reported as 'unstable', or somehow differentiated from the actual failures.

I tried hooking in from listeners (e.g. ISuiteListener) and from the retry analyzer itself, setting the SUCCESS_PERCENTAGE_FAILURE status for these, but the TestNG report still shows only passed or failed tests.

Does anyone have a better approach?


Thanks!

Iván.-


Re: Handling flaky tests

François Reynaud
I would create my own report implementing IReporter.
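
Something along these lines, for example (a rough sketch; how a test gets marked "unstable" -- here a "retried" attribute assumed to be set by the retry analyzer -- is up to you):

    import java.util.List;
    import org.testng.IReporter;
    import org.testng.ISuite;
    import org.testng.ISuiteResult;
    import org.testng.ITestContext;
    import org.testng.ITestResult;
    import org.testng.xml.XmlSuite;

    // Rough sketch: list passed tests, flagging the ones that needed a retry.
    public class UnstableAwareReporter implements IReporter {
        public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites,
                                   String outputDirectory) {
            for (ISuite suite : suites) {
                for (ISuiteResult sr : suite.getResults().values()) {
                    ITestContext ctx = sr.getTestContext();
                    for (ITestResult r : ctx.getPassedTests().getAllResults()) {
                        // "retried" is a hypothetical attribute your retry analyzer would set
                        String tag = r.getAttribute("retried") != null ? "[UNSTABLE] " : "[PASSED]   ";
                        System.out.println(tag + r.getMethod().getMethodName());
                    }
                }
            }
        }
    }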



Re: Handling flaky tests

Ivan ✪
Well, it's not just about the report. I don't want my build to fail when the only failures are tests that failed but then passed on their retry. I just want TestNG to treat those tests as passed or failed while taking this 'unstable' condition into account.
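
The kind of thing I'm after, roughly sketched (I don't know whether mutating results from a listener like this is even supported): after each <test> finishes, drop the failed results whose method also has a passed result, so only genuine failures remain and the build stays green.

    import java.util.ArrayList;
    import java.util.List;
    import org.testng.ITestContext;
    import org.testng.ITestNGMethod;
    import org.testng.ITestResult;
    import org.testng.TestListenerAdapter;

    // Sketch: after a <test> finishes, remove failed results whose method also has
    // a passed result, so "failed, then passed on retry" doesn't fail the build.
    public class DropRetriedFailuresListener extends TestListenerAdapter {
        @Override
        public void onFinish(ITestContext context) {
            List<ITestNGMethod> retriedAndPassed = new ArrayList<ITestNGMethod>();
            for (ITestResult failed : context.getFailedTests().getAllResults()) {
                if (!context.getPassedTests().getResults(failed.getMethod()).isEmpty()) {
                    retriedAndPassed.add(failed.getMethod());
                }
            }
            for (ITestNGMethod m : retriedAndPassed) {
                context.getFailedTests().removeResult(m);   // keep only real failures
            }
        }
    }

(registered via <listeners> in testng.xml or @Listeners on the test class)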



Re: Handling flaky tests

Jeff-351
I would really like something similar as well. What we're really talking about is a non-fatal error or warning (or whatever we call it) that can flag test cases in the report as possibly needing attention, but that doesn't halt or otherwise affect test execution.
 
When I run my WebUI tests in Selenium, we are often checking HTML content (does the element have the right class, is the right translation on the label). If such a check fails, it doesn't affect functionality, but it needs to be flagged as something to be addressed.
 
I'm looking at ways to work around it. I have some assertXXX() methods, but I also have related expectsXXXX() methods that do the check, then cache any errors to be dumped later. What I don't yet know how to do is tie those failures to the test method without failing the method directly.
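
Roughly what I have today (simplified); the Reporter.log() call at the end is one possible way to tie the message to the currently running test method without failing it:

    import java.util.ArrayList;
    import java.util.List;
    import org.testng.Reporter;

    // Simplified sketch of the "expects" idea: record the failed check and
    // attach it to the running test's report output instead of throwing.
    public class SoftChecks {
        private final List<String> problems = new ArrayList<String>();

        public void expectEquals(Object actual, Object expected, String message) {
            boolean equal = (actual == null) ? expected == null : actual.equals(expected);
            if (!equal) {
                String problem = message + ": expected <" + expected + "> but was <" + actual + ">";
                problems.add(problem);
                Reporter.log("WARN " + problem);   // shows up under the current test method
            }
        }

        public List<String> getProblems() {
            return problems;
        }
    }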
 

--
Jeff Vincent
[hidden email]
See my LinkedIn profile at:
http://www.linkedin.com/in/rjeffreyvincent
I ♥ DropBox !! 


Re: Handling flaky tests

Cédric Beust ♔-2
There are a couple of reasons why this can't be done currently:

  1. You are discussing the possibility of adding new test statuses to TestNG. Right now, there are only three (four actually, but let's forget the last one): PASSED, FAILED, SKIPPED. The ability for users to add their own statuses wouldn't be very difficult to implement, but I'm not sure how useful it would be, since TestNG wouldn't know how to treat them. Should these new statuses have to fit into one of the three existing categories? And if they belong to a brand new category, how should TestNG treat it?

  2. TestNG listeners are meant to be just that: listeners. The behavior is undefined if you modify the test context from one, especially if a test has status X and you change that status to Y in the listener. Again, it's not impossible to fix, I just didn't implement the listeners with this possibility in mind, so that behavior is undefined right now.

Overall, I'm more open to discussing 2) than 1), which seems only marginally useful. And if we end up identifying a new status that would be useful, I'd rather add official support for it in TestNG than just open the door to this kind of customization. I think it's more likely that users will at some point need one specific new status X than that they will need the ability to add arbitrary test statuses.
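
For example, this is exactly the kind of code whose behavior is undefined today (it compiles against the standard API, but what it does to the reports and the exit code is not specified):

    import org.testng.ITestResult;
    import org.testng.TestListenerAdapter;

    // A listener flipping a result's status from X to Y -- the undefined case above.
    public class StatusFlippingListener extends TestListenerAdapter {
        @Override
        public void onTestFailure(ITestResult result) {
            result.setStatus(ITestResult.SUCCESS_PERCENTAGE_FAILURE);
        }
    }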

-- 
Cédric





Re: Handling flaky tests

Jeff-351
I really like the idea of a new, fully supported TestNG status like 'warn': so 'pass', 'fail', 'warn', 'skip'.
 

Re: Handling flaky tests

François Reynaud
Also, it's important to figure out why the tests are flaky in the first place.
A web site that only kind of works should be a fail.
If you think this is a Selenium problem, please come by the Selenium IRC channel.


Re: Handling flaky tests

Marc Guillemot
In reply to this post by Cédric Beust ♔-2
Hi,

To (1): for me it would be interesting to have new statuses. "Tests with warnings" as proposed by Jeff sounds interesting. Another status would be very interesting for me: "not yet implemented". This would be for tests that are expected to fail (because the tested feature is not yet implemented) but that should still run without breaking the build, so that you get notified once the implementation is done. Special information in the reports for this kind of test would be great.
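
One way to approximate that today with the statuses TestNG already has (just a sketch, the feature call is a made-up placeholder): report SKIPPED while the feature is missing, and fail once it starts working. But a real status with its own spot in the report would be much nicer.

    import org.testng.SkipException;
    import org.testng.annotations.Test;

    public class GiftCardTests {

        // Sketch: while the (stubbed) feature below is missing the test shows up
        // as SKIPPED; once it starts working, the AssertionError reminds you to
        // turn this into a normal test.
        @Test
        public void redeemGiftCard_notYetImplemented() {
            try {
                redeemGiftCard("ABC-123");   // the feature under test (stubbed below)
            } catch (Throwable expectedForNow) {
                throw new SkipException("not yet implemented: " + expectedForNow);
            }
            throw new AssertionError("Feature works now - write the real assertions.");
        }

        // Hypothetical placeholder for the real production call.
        private void redeemGiftCard(String code) {
            throw new UnsupportedOperationException("not implemented");
        }
    }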

Cheers,
Marc.
--
HtmlUnit support & consulting from the source
Blog: http://mguillem.wordpress.com




Re: Handling flaky tests

François Reynaud
for "not yet implemented" I use what Cedric described in his book I think.
tag the tests with group="broken" . I add that tag for the tests not yet implemented, or the ones I know fail because of a bug.
Hudson exclude broken when it runs, so my result isn't poluted with failures.

At the end of the testing cycle, i do a run of the broken group and check it's empty.
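
In code that's just the group tag (the test method here is a made-up example):

    import org.testng.annotations.Test;

    public class CheckoutTests {

        // Known bug / not-yet-implemented feature: tagged so CI can exclude it.
        @Test(groups = { "broken" })
        public void checkoutWithGiftCard() {
            // ...
        }
    }

and the Hudson job runs a testng.xml with <exclude name="broken"/> inside <groups><run>, while the end-of-cycle run includes only that group.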

thanks,
François


Re: Handling flaky tests

Marc Guillemot
Interesting, but I prefer to run them together with the other tests. With JUnit I use a custom runner for this purpose, but it would be nice if it were directly supported in TestNG.

Cheers,
Marc.




Re: Handling flaky tests

François Reynaud
I'm running Selenium tests against a slow environment. Running tests that I know will fail just wastes resources and makes report analysis harder for QA. As soon as a bug has been identified and logged, I don't see the point of running the test again until the bug is fixed.
Are you running everything all the time to minimize config changes, or is it something else?
