I'm considering two different approaches to structuring my acceptance tests. We have a Silverlight project that calls into a service layer (I own both sides). Because Silverlight assemblies can only be referenced from Silverlight projects, test code that calls them has to live in a separate test project from the rest of the non-Silverlight tests.
1) Take all the acceptance criteria we come up with and put them in feature files. Label each scenario with tags that specify the environment it runs in (@server, @client, etc.), and include the manual tests in the feature files too, tagged @manual (see the sketch below the pros/cons).
Pros: All of the tests that the BAs write up will be in one place, where they can view and potentially edit them.
Cons: It might make more sense to cover some of the scenarios with unit tests or integration tests, and NUnit might be a better tool for those than SpecFlow.
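To illustrate option 1, here's a minimal sketch of a tagged feature file (the feature and scenario names are hypothetical). Since SpecFlow emits each tag as an NUnit [Category] attribute on the generated test, each test project can run just its own environment's scenarios, e.g. with nunit-console's /include:server switch:

    Feature: Order submission

    @server
    Scenario: Service rejects an order with no line items
        Given an order with no line items
        When the order is submitted to the service
        Then the service returns a validation error

    @client
    Scenario: Client disables the submit button while saving
        Given an order is being saved
        Then the submit button is disabled

    @manual
    Scenario: Confirmation email renders correctly in Outlook
        Given a submitted order
        Then the confirmation email displays correctly in Outlook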
2) Write acceptance criteria for everything, but automate some scenarios in SpecFlow, some with unit tests, some with integration tests, etc. Only the SpecFlow-automated scenarios would actually be bound to step definitions. We might still put the unit-tested, integration-tested, and manually tested scenarios in the feature files, but they wouldn't run any code; they'd be there purely as documentation (see the sketch below the pros/cons).
Pros: Less friction and overhead for the developers, and each kind of test gets automated with the best tool we have for it.
Cons: We will have to keep the scenarios that aren't run by SpecFlow in sync with whatever code does automate them.
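For the documentation-only scenarios in option 2, one way to keep them from showing up as unbound or pending tests is SpecFlow's built-in @ignore tag, which marks the generated test with [Ignore]. A sketch, with hypothetical scenario and tag names (@manual and @unit are just labels we'd invent; only @ignore has special meaning to SpecFlow):

    @manual @ignore
    Scenario: Installer upgrades a 1.0 database in place
        Given a machine with version 1.0 installed
        When the 2.0 installer is run
        Then the existing data is preserved

    @unit @ignore
    Scenario: Discount calculation rounds to two decimal places
        # Actually automated as a plain NUnit test; kept here for documentation only
        Given an order totalling 10.005
        When the 10% discount is applied
        Then the discount shown is 1.00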
Thoughts? Is there another way that I'm not thinking of? How do you do it?