I am using SpecFlow to power an API integration test harness that will provide living documentation and test coverage for a new UI API. I have a few feature files written and have finally reached the point of running around 60 tests in parallel. However, while I can run the features individually with no issue, I run into intermittent failures when running them all in parallel using the Visual Studio 2019 test runner and the xUnit runner plugin.
Any given SpecFlow scenario will utilize steps from two different step binding classes, and each of those step binding classes may call for the injection of up to three context objects meant to capture state during the scenario and clean up the environment after the scenario completes. For example, a feature might look like this:
```gherkin
Scenario: Retrieving Message records returns data
    Given I have created the following ClientAccounts:
        | Index | SiteID | IsActive |
        | 1     | 1      | 1        |
    And I have created the following Logins:
        | Index | IsActive |
        | 1     | 1        |
    And I have created the following Messages:
        | MessageID | MessageText |
        | 1         | Asdfasdf    |
    When I send an authentication request using the first Login and the IP Address 127.0.0.1
    And I send a read request to the v1 Message endpoint for the first Message record created:
    Then the first Message response should be equivalent to the following data for the first Message record created:
        | MessageText |
        | Asdfasdf    |
```
The first three steps belong to a class called DatabaseSteps whose constructor accepts an instance of a class DataUtility that facilitates CRUD operations to/from the database and keeps track of what records have been created as a part of the test execution. There are also some [StepArgumentTransformation] bindings that transform those tables into database objects that can be inserted into the db.
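For context, DatabaseSteps is shaped roughly like this (the `Insert` method and `ClientAccount` POCO are illustrative names; the real CRUD methods live on DataUtility):

```csharp
using System.Collections.Generic;
using TechTalk.SpecFlow;

[Binding]
public class DatabaseSteps
{
    private readonly DataUtility _dataUtility;

    // SpecFlow's built-in BoDi container supplies the DataUtility instance.
    public DatabaseSteps(DataUtility dataUtility)
    {
        _dataUtility = dataUtility;
    }

    [Given(@"I have created the following ClientAccounts:")]
    public void GivenIHaveCreatedClientAccounts(List<ClientAccount> accounts)
    {
        // The Gherkin table arrives here already converted to
        // List<ClientAccount> by a [StepArgumentTransformation]
        // binding defined elsewhere.
        _dataUtility.Insert(accounts);
    }
}
```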
The fourth, fifth, and sixth steps belong to additional step classes that have constructors taking as dependencies both a DataUtility for db access and an ApiClientContext which stores session information as well as info about API responses that have previously been saved, in order to assert against the actual responses obtained during the "Then" stage. The DataUtility class implements IDisposable to simplify post-test cleanup.
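Those step classes follow the same pattern, just with two injected dependencies (class name is illustrative):

```csharp
using TechTalk.SpecFlow;

[Binding]
public class MessageApiSteps
{
    private readonly DataUtility _dataUtility;
    private readonly ApiClientContext _apiClientContext;

    // Both context objects are resolved by SpecFlow's per-scenario container.
    public MessageApiSteps(DataUtility dataUtility, ApiClientContext apiClientContext)
    {
        _dataUtility = dataUtility;
        _apiClientContext = apiClientContext;
    }

    // The step bodies read records tracked by _dataUtility and store API
    // responses on _apiClientContext for the Then assertions.
}
```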
Based on the documentation I expected that the context classes injected through the built-in DI container would be thread-safe, with each scenario receiving its own instance. However, whether DataUtility implements IDisposable or I instead skip the interface and call Dispose() directly from an [AfterScenario] hook, tests that assert on returned data fail during most parallel runs. It's hard to say for sure, because troubleshooting concurrency issues is awful, but it appears the DataUtility instance is being shared among scenarios: when any one scenario calls Dispose(), all of the test data I've scaffolded is purged, including data belonging to other, unrelated scenarios. When DataUtility neither implements IDisposable nor has Dispose() called from a hook, the tests execute without incident. Is there a particular way I need to set up an injected context class so that each scenario gets its own instance of that class?
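The hook variant looks roughly like this (class and method names are illustrative):

```csharp
using TechTalk.SpecFlow;

[Binding]
public class CleanupHooks
{
    private readonly DataUtility _dataUtility;

    public CleanupHooks(DataUtility dataUtility)
    {
        _dataUtility = dataUtility;
    }

    // Variant without IDisposable on DataUtility: dispose explicitly
    // after each scenario. Both this and the IDisposable approach
    // produce the intermittent failures described above.
    [AfterScenario]
    public void CleanUpTestData()
    {
        _dataUtility.Dispose();
    }
}
```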
Other Details: VS2019, SpecFlow 3, XUnit Test Runner
Update (from comments): DataUtility itself contains two classes, ProductDatabase and STSDatabase, each of which holds both a database provider class (which executes CRUD operations against its respective database) and various List&lt;T&gt; collections, where T is a database POCO, tracking the records created so they can be deleted when a scenario finishes. – Thomas Parikka
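To make that structure concrete, DataUtility is shaped roughly like this (`DeleteCreatedRecords` is an illustrative name for the cleanup the providers perform):

```csharp
using System;

public class DataUtility : IDisposable
{
    // Each database wrapper holds a provider class for CRUD plus
    // List<T> collections tracking the POCOs created during the scenario.
    public ProductDatabase ProductDatabase { get; } = new ProductDatabase();
    public STSDatabase STSDatabase { get; } = new STSDatabase();

    public void Dispose()
    {
        // Deletes every tracked record; under parallel execution this
        // appears to wipe data that other scenarios still need.
        ProductDatabase.DeleteCreatedRecords();
        STSDatabase.DeleteCreatedRecords();
    }
}
```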