21 votes

What's the best practice for creating webdriver instances in Selenium-webdriver? Once per test method, per test class, or per test run?

They seem to be rather (very!) expensive to spin up, but keeping it open between tests risks leaking information between test methods.

Or is there an alternative - is a single webdriver instance a single browser window (excluding popups), or is there a method for starting a new window/session from a given driver instance?

Thanks, Matt


2 Answers

17 votes

I've found that reusing browser instances between test methods has been a huge time saver when using real browsers, e.g. Firefox. When running tests with HtmlUnitDriver, there is very little benefit.
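For example, here is a minimal JUnit 4 sketch of the shared-instance approach (the class name is illustrative, and Firefox is just one choice of real browser):

    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class SharedDriverTest {

        // One browser instance shared by every test method in this class
        private static WebDriver driver;

        @BeforeClass
        public static void startBrowser() {
            // The expensive part happens once per class, not once per test
            driver = new FirefoxDriver();
        }

        @AfterClass
        public static void stopBrowser() {
            // Shut the browser down after the last test in the class has run
            driver.quit();
        }

        // ... @Test methods all use the shared 'driver' field ...
    }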

Regarding the danger of non-deterministic tests: it's a trade-off between totally deterministic tests and your time. Integration tests often involve trade-offs like these. If you want totally deterministic integration tests, you should also be worrying about clearing the database/server state between test runs.

One thing you should definitely do if you are going to reuse browser instances is clear the cookies between runs:

driver.manage().deleteAllCookies();

I do that in a tearDown() method. Also, if your application stores any data on the client side, you'd need to clear that as well (perhaps via JavascriptExecutor). To the application under test, the next test should look like a completely unrelated request after doing this, which really minimizes the risk of non-deterministic behaviour.
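As a concrete sketch, that tearDown() might look something like this (the Web Storage calls are an assumption about where your client-side data lives; adjust them for your application):

    import org.junit.After;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;

    public class ReusedBrowserTest {

        private static WebDriver driver; // the shared instance, as above

        @After
        public void tearDown() {
            // Delete all cookies so the next test starts from a clean session
            driver.manage().deleteAllCookies();

            // If the app keeps state on the client, wipe that as well
            // (this assumes it lives in Web Storage)
            ((JavascriptExecutor) driver).executeScript(
                    "window.localStorage.clear(); window.sessionStorage.clear();");
        }
    }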

10 votes

If your goal with automated integration testing is to have reproducible tests, then I would recommend a new WebDriver instance for every test execution.

Each test should stand alone, independent of any other test or its side effects.
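As a rough JUnit 4 sketch of that approach (class name and browser choice are illustrative):

    import org.junit.After;
    import org.junit.Before;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class IsolatedDriverTest {

        private WebDriver driver;

        @Before
        public void setUp() {
            // A fresh browser for every test method: slower, but fully isolated
            driver = new FirefoxDriver();
        }

        @After
        public void tearDown() {
            // Always quit, even after a failure, so stray browsers don't pile up
            driver.quit();
        }

        // ... each @Test method gets its own pristine browser session ...
    }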

Personally, the only thing I find more frustrating than a hard-to-reproduce bug is a non-deterministic test that you don't trust.

(This becomes even more crucial for managing the test data itself, particularly when you look at tests which can modify persistent application state, like CRUD operations.)

Yes, additional test execution time is costly, but it is better than spending that time debugging your tests.

One way to offset this penalty is to roll your testing directly into your build process, moving beyond a Continuous Build to a Continuous Integration approach.

Also, try to limit the scope of your integration tests. If a lot of heavy integration tests are eating up execution time, try to refactor: instead, increase the coverage of your more lightweight unit tests of the underlying service calls (where your business logic lives).