Using BDD in our last project, we were eager to formulate our scenarios before implementation started (or at least to write them in parallel with development) – all in the spirit of Test First.
The difference from a TDD approach is that we didn't write one scenario at a time, implement that behavior, and only then move on to the next one. Instead, we wrote many or all behavior tests for a user story before implementation. We did this not only because of our Gherkin-first approach, but also because the people formulating the scenarios were not the same people implementing them. Consequently, we had a lot of failing behavior tests before implementation even started.
This led to the problem that our CI build was always failing because of unbound steps or missing implementations. So a developer couldn't tell whether they had broken the build with something they implemented or whether it was red because of unbound scenarios.
1. Tag single scenarios or whole features that still need step definitions
Agree in your team on a specific tag (e.g. @BindingPlease), then tag the affected scenarios or features in your feature files with it.
@BindingPlease
Feature: feature title

@BindingPlease
Scenario: scenario title
2. Exclude tests with said tag from the CI build so that tests tagged @BindingPlease won't break it
In your Azure DevOps CI build, configure the build step "Test Assemblies – Visual Studio Test" and set the option "Test filter criteria" to exclude a single tag.
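For example, to exclude everything tagged @BindingPlease, the filter criteria (using the standard vstest TestCategory filter syntax; the tag is mapped to a category without the leading @) would look like this:

```
TestCategory!=BindingPlease
```

With this filter, only tests that do not carry the BindingPlease category are executed.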
We also used this approach for pushed scenarios that were not fully formulated yet, or where we didn't yet have the correct examples or results down (e.g. @WIP).
You exclude multiple tags in this way:
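Chain the conditions with & (logical AND), so that only tests carrying neither tag are run; for example, to exclude both @BindingPlease and @WIP:

```
TestCategory!=BindingPlease&TestCategory!=WIP
```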
Make sure that the spelling of your tag in your feature files matches the spelling in your build configuration!
Tags are always transformed to TestCategories, so you can use this approach with any other CI tool; it is not limited to Azure DevOps.
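Because the tags end up as plain TestCategory values, the same filter expression also works outside a build task – a sketch using the dotnet CLI (the project name here is just a placeholder for your own spec project):

```shell
dotnet test MyProject.Specs.csproj --filter "TestCategory!=BindingPlease&TestCategory!=WIP"
```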
3. Have a separate PR build where all tests are executed
With this we made sure we didn't accidentally merge a branch that still had unbound or failing tests. Our PR build needed to be green before a pull request could be approved. Set up a PR build that is identical to the CI build, with the only difference that you leave the "Test filter criteria" empty.
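In a YAML pipeline this boils down to two otherwise identical test tasks – a sketch assuming the VSTest@2 task (the assembly pattern and tag names are illustrative and may differ in your setup):

```yaml
# CI build: skip scenarios that are not bound yet
- task: VSTest@2
  inputs:
    testAssemblyVer2: '**/*.Specs.dll'
    testFiltercriteria: 'TestCategory!=BindingPlease&TestCategory!=WIP'

# PR build: no filter, so unbound or unfinished scenarios fail the build
- task: VSTest@2
  inputs:
    testAssemblyVer2: '**/*.Specs.dll'
    testFiltercriteria: ''
```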
Why didn't we just use the ignore tag?
In one particular project in my past, we had quite a lot of ignored scenarios/tests. Some existed because we never meant to automate that particular scenario but still wanted the behavior documented; some because the customer switched back and forth between different calculation logics, and at some point the scenario with the other calculation logic was no longer deleted but simply ignored. Most of the ignored tests, though, just accumulated over time, probably with the good intent to fix them later…
The problem with ignored tests is that the only information you get is that the test is skipped/ignored. You don't know whether that's on purpose or because something was forgotten. If we were lucky, there was a comment somewhere near the ignore attribute; otherwise, the tests were simply ignored (and we had to invest a lot of time to find out whether they were still relevant).
With the tag approach, we had the information of the tag directly with the test result in the test explorer, e.g. DocumentationOnly or BindingPlease. In addition, the PR build told us we hadn't forgotten any bindings. Ignored tests, by contrast, would also be skipped in the pipeline, so that safety net would be lost.
I hope you find our solution with tagging and the extra PR build helpful.
Did you run into similar problems in your project? If so, how did you solve it? Please let us know in the forum discussion.