1 vote

Keywords in a Robot Framework library fall into two categories: keywords that carry out a regular test step (e.g. Click Button) and keywords that verify something (e.g. Table Column Should Contain). The latter typically have the word "Should" in their names.

I assume that Robot Framework only assigns a PASS or FAIL status to executed test cases in the report. How can I distinguish test cases that failed in a test step keyword from those that failed in a verification keyword?

For example, a calculator test case clicks the 2, +, 2, and = buttons and then verifies the answer 4 in its last keyword (e.g. Should Be Equal As Numbers). If it fails because it could not click a button, I consider it as having failed to carry out its actual verification (my result processing script will not log a bug here). However, if it fails while actually verifying the result, then it is a valid bug associated with the test case (my result processing script can act accordingly, e.g. by logging a bug).

If there is no technique for generating the result file as per my requirement (PASS, FAIL, and perhaps a FAIL_TO_VERIFY status), then I am seeking a technique to process the output or log XML to identify the kind of failure (FAIL vs FAIL_TO_VERIFY) for every failed test case.
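To make the requirement concrete: my result processing script could walk output.xml with Robot Framework's Python result API (robot.api). A trimmed sketch, where the "should"-in-the-name check is only a stand-in for whatever reliable signal actually marks a verification keyword:

from robot.api import ExecutionResult, ResultVisitor

class FailureClassifier(ResultVisitor):

    def start_test(self, test):
        self.failed_in_verification = False

    def start_keyword(self, keyword):
        # Stand-in predicate: treat a failing "Should ..." keyword as a
        # verification failure; adjust to your own naming convention or tag.
        if keyword.status == 'FAIL' and 'should' in keyword.name.lower():
            self.failed_in_verification = True

    def end_test(self, test):
        if test.status == 'FAIL':
            # A genuine verification failure stays FAIL (a real bug); a test
            # that died in a step keyword never reached its verification.
            verdict = 'FAIL' if self.failed_in_verification else 'FAIL_TO_VERIFY'
            print(f'{test.name}: {verdict}')

result = ExecutionResult('output.xml')
result.visit(FailureClassifier())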

PS: I have already figured out the bug-logging part of my result processing script, so consider that out of scope for this question.

Your question seems confusing; can you please break down your problem statement? You want to differentiate, in the logs, application verification failures from test step failures, and whenever you get a verification failure you need to log a bug. Is that correct? – MD5
Yes, that's correct. I will try to break down the problem statement for better understandability. – Amit Tendulkar

2 Answers

4 votes

The only thing Robot Framework provides in this regard is that a keyword failure during a test setup is reported distinctly (the test's message gets a "Setup failed:" prefix). If your tests are designed such that you always do a bunch of setup followed by a set of verifications, this would do what you want.

However, in my experience most tests are not like that. Often a test will have some setup, some verifications, then more steps, and then more verifications. Best practice says not to write tests like that, but sometimes it is unavoidable (or at least inconvenient to avoid).

One possible workaround is to create your own keyword called "verify" that works like "Run Keyword", but catches any failure and then sets a tag, writes to the log, or raises a custom error.

Your test might look like this:

*** Test cases ***
Example
    open browser  http://example.com  chrome
    click button  submit
    verify   page title should be  Hello, world
    verify   page should contain   Welcome, internet visitor!

The verify keyword would then run the wrapped keyword and, if an error occurs, catch it and raise a new error such as: verification failed for "page title should be Hello, world": <actual error>

You could also set a tag like "verification-failure" on the test when this keyword fails. You would then get a nice statistic in the report showing how many tests have this tag (and thus how many tests failed due to verification failures).
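A minimal sketch of such a wrapper as a user keyword, built from the BuiltIn keywords Run Keyword And Ignore Error, Set Tags, and Fail (the keyword and tag names are only suggestions):

*** Keywords ***
Verify
    [Arguments]    ${keyword}    @{args}
    # Run the wrapped keyword without immediately failing the test
    ${status}    ${message} =    Run Keyword And Ignore Error    ${keyword}    @{args}
    Run Keyword If    '${status}' == 'FAIL'    Set Tags    verification-failure
    Run Keyword If    '${status}' == 'FAIL'
    ...    Fail    verification failed for "${keyword}": ${message}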

0 votes

Please check the keyword Register Keyword To Run On Failure in Selenium2Library. It lets you execute any other keyword whenever a Selenium2Library keyword fails, so you can call your bug-reporting keyword there.
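A minimal sketch, assuming Selenium2Library is imported and that Log Bug Details is a hypothetical placeholder for your own bug-reporting keyword:

*** Settings ***
Library    Selenium2Library

*** Test cases ***
Example
    # From here on, any Selenium2Library failure triggers the keyword below
    Register Keyword To Run On Failure    Log Bug Details
    Open Browser    http://example.com    chrome
    Click Button    submit

*** Keywords ***
Log Bug Details
    # Hypothetical placeholder: replace with your real bug-logging logic
    Capture Page Screenshot
    Log    A Selenium2Library keyword failed here.    level=WARN

Note, however, that this hook only fires when a Selenium2Library keyword fails; failures in other libraries' keywords (such as BuiltIn verifications) will not trigger it.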