1 vote

I'm trying to clean up our functional suite at work, and I was wondering if there is a way to have Cucumber repeat a scenario and check whether it passes before moving on to the next scenario in the feature. PhantomJS is my headless WebKit browser and Poltergeist is my driver.

Basically, our build keeps failing because the box gets overwhelmed by all the tests, and during a scenario the page won't have enough time to render whatever it is we're trying to test. Therefore, this produces a false positive. I know of no way to anticipate which test will hang up the build.

What would be nice is a hook (one idea) that runs after each scenario. If the scenario passes, great: print the results for that scenario and move on. However, if the scenario fails, run it again just to make sure it isn't the build getting dizzy. Then, and only then, print the results for that scenario and move on to the next test.

Does anyone have any idea on how to implement that?

I'm thinking something like

    After do |scenario|
      if scenario.failed?
        # run_again is a method I just made up; I know for a fact it doesn't actually exist
        # (see http://cukes.info/api/cucumber/ruby/yardoc/Cucumber/Ast/Scenario.html)
        result = scenario.run_again
        Cucumber.wants_to_quit = true unless result
      end
    end
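
For comparison, a step-level workaround would look roughly like this, assuming Capybara and RSpec matchers are in play. This is only a sketch: with_retries is a helper I'm making up for illustration (it is not a Cucumber, Capybara, or RSpec API) and the selector is hypothetical.

    # features/support/retry_helper.rb (sketch)
    # Retries a flaky assertion a few times before letting it fail for real.
    module RetryHelper
      def with_retries(attempts = 3, wait = 1)
        tries = 0
        begin
          tries += 1
          yield
        rescue RSpec::Expectations::ExpectationNotMetError, Capybara::ElementNotFound => e
          raise e if tries >= attempts
          sleep wait
          retry
        end
      end
    end

    World(RetryHelper)

    # Example use in a step definition:
    Then(/^I should see the dashboard$/) do
      with_retries(3) do
        expect(page).to have_css('#dashboard') # hypothetical selector
      end
    end

But that has to be sprinkled into every flaky step, which is why I'd prefer something at the scenario level.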

The initial solution I saw for this was: How to rerun the failed scenarios using Cucumber?

This would be fine, but I would need to make sure that

    cucumber @rerun.txt

actually corrected the reports if the tests passed. Something like:

    cucumber @rerun.txt --format junit --out foo.xml

Where foo.xml is the JUnit report that initially said features 1, 2 & 5 were passing while 3 and 4 were failing, but which would now say 1, 2, 3, 4 & 5 are passing even though rerun.txt only said to rerun 3 and 4.
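
If the rerun pass doesn't rewrite foo.xml the way I want, I imagine I could merge the two reports myself with a small script along these lines (purely illustrative: it assumes a single-file JUnit report like the foo.xml above, and rerun_junit.xml is a made-up name for the rerun pass's output):

    # merge_junit.rb (sketch)
    # Copies testcases that passed on the rerun over their failed entries in the
    # original report. Doesn't recompute the <testsuite> failure/error counts.
    require 'nokogiri'

    original = Nokogiri::XML(File.read('foo.xml'))
    rerun    = Nokogiri::XML(File.read('rerun_junit.xml')) # made-up file name

    rerun.xpath('//testcase').each do |fixed|
      next if fixed.at_xpath('failure') # still failing on the rerun; leave the original alone

      key = [fixed['classname'], fixed['name']]
      original.xpath('//testcase').each do |old|
        old.replace(fixed.to_xml) if [old['classname'], old['name']] == key
      end
    end

    File.write('foo.xml', original.to_xml)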

Sounds like you have a load-capacity concern here, to be truthful. If your test env gets overwhelmed by a few Cucumber tests, what chance does the real environment have if multiple users happen to hit it at the same time? Maybe the focus should be on either a more robust test server that is more like production, or on addressing the issue of site performance under what is likely a pretty modest load (this is, after all, functional testing, not load testing; how many tests are running at the same time?) - Chuck van der Linden

@mpdunson Did you happen to find a way to do it? I basically know the cucumber rerun command here, but that wouldn't work for me. The tests I have are dependent on each other due to the way the application is designed, so all my tests fail if the second test case fails for some reason. I was looking for a way similar to yours: can I run a particular scenario until it passes before going to the next, instead of using an until or unless loop in the steps? Any idea on it? - Emjey

1 Answer

3 votes

I use rerun extensively, and yes, it does output the correct features into the rerun.txt file. I have a cucumber.yml file that defines a bunch of "profiles". Note the rerun profile:

    <%
    # If rerun.txt exists and is non-empty, run only the scenarios listed in it
    # (the rerun formatter writes them as file:line entries); otherwise run the
    # whole features directory.
    rerun = File.file?('rerun.txt') ? IO.read('rerun.txt') : ""
    rerun_opts = rerun.to_s.strip.empty? ? "--format #{ENV['CUCUMBER_FORMAT'] || 'progress'} features" : "--format #{ENV['CUCUMBER_FORMAT'] || 'pretty'} #{rerun}"
    %>
    <% standard_opts = "--format html --out report.html --format rerun --out rerun.txt --no-source --format pretty --require features --tags ~@wip" %>

    default: <%= standard_opts %> --no-source --format pretty --require features
    rerun: <%= rerun_opts %> --format junit --out junit_format_rerun --format html --out rerun.html --format rerun --out rerun.txt --no-source --require features
    core: <%= standard_opts %> --tags @core
    jenkins: <%= standard_opts %> --tags @jenkins

So what happens here is that I run cucumber. During the initial run, it throws all the failed scenarios into the rerun.txt file. Then I rerun only the failed tests with the following command:

    cucumber -p rerun

The only downside is that it requires an additional command (which you can automate, of course) and that it clutters up your test metrics if you have them in place.
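
If you want to automate the two passes, a minimal Rakefile along these lines would do it (my own sketch, not something cucumber ships with; the ci task name is arbitrary and it assumes the default and rerun profiles above):

    # Rakefile (sketch)
    desc 'Run the suite once, then retry whatever landed in rerun.txt'
    task :ci do
      passed = system('cucumber -p default')
      if !passed && File.size?('rerun.txt')
        passed = system('cucumber -p rerun')
      end
      abort('Cucumber scenarios still failing after the rerun pass') unless passed
    end

That way the build only goes red if a scenario fails on both passes.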