This week’s challenge is solving a problem I’ve often seen when manual tests get rewritten as Given-When-Then:
We have a long feature file. The scenarios build on each other, so the result of one scenario is effectively the starting point for another. This made it easy to write tests, but it’s making it difficult to maintain them. Someone updating a single scenario at the top of the file can break a lot of tests later in the file. How would you fix this?
When working with a system that has limited automation support, or performing actions that are very slow, it’s quite common (but unfortunate) to specify a feature through a chain of scenarios. Each scenario sets the stage for the next one, expecting the context to continue from the preceding test executions. Some scenarios even leave the “Given” part implied. For example:
```gherkin
Scenario: (1) register unique user
  Given no users are already registered
  When a user registers with username "mike99"
  Then the registration succeeds

Scenario: (2) create order requires payment method
  Given the user adds "Stories that stick" to the shopping cart
  When the user checks out
  Then the order is "pending" with message "Payment method required"

Scenario: (3) register payment method
  Given user adds a payment method "Visa" with card number "4242424242424242"
  When the user checks out the orders page
  Then the order is "pending" with message "Processing payment"

Scenario: (4) check out with existing payment method
  Given the user adds "Stories that stick" to the shopping cart
  When the user checks out
  Then the order is "pending" with message "Processing payment"

Scenario: (5) reject duplicated username
  When a user registers with username "mike99"
  Then the registration fails with "Duplicated username"
```
There are three downsides to this type of structure.
The first is that the readers have to remember the context. For example, scenarios 2 and 4 have the same precondition and trigger, but a different outcome. In this particular case, as there is only one scenario between them, it’s relatively easy to guess why they are different. As the list grows, that becomes significantly more difficult.
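To illustrate how much implied context a reader has to reconstruct, scenario 4 could be restated with its preconditions spelled out. This is a hypothetical rewrite; the step wording is illustrative, not taken from the original file:

```gherkin
Scenario: check out with existing payment method (context made explicit)
  Given a registered user "mike99"
  And the user has a payment method "Visa" with card number "4242424242424242"
  And the user adds "Stories that stick" to the shopping cart
  When the user checks out
  Then the order is "pending" with message "Processing payment"
```

Written this way, the reason scenarios 2 and 4 produce different outcomes is visible in the scenario itself, rather than buried in the scenarios that ran before it.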
The second is that updating scenarios becomes quite tricky. If, for some reason, we wanted to change the username in the first scenario, the last scenario would also need to change. Otherwise, the test based on that scenario would start to fail mysteriously.
The third is that problems in executing tests usually propagate down the chain. If a test based on the third scenario fails because of a bug, the fourth scenario test will likely fail as well, since the context is no longer correct. This can lead to misleading test reports and complicated troubleshooting. Verifying a fix also becomes more difficult, since it’s not possible to just execute the third scenario in isolation – we have to run the first two as well.
To fix the downsides, we also need to consider the reasons why people keep writing such specifications. Because each scenario extends the previous one, it’s quite easy to write the initial version of such a file. Also, as the steps from the previous scenarios do not need to be repeated to set up the context, the overall execution is quicker than if each scenario were to set up everything it needs from scratch.
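For comparison, Gherkin’s Background keyword offers one middle ground: shared setup steps are written once but run before every scenario in the feature, so authors avoid repetition while each scenario stays independently executable. A minimal sketch, with illustrative step wording rather than steps from the original file:

```gherkin
Feature: checkout

  Background:
    Given a registered user "mike99"
    And the user has a payment method "Visa"

  Scenario: check out with existing payment method
    Given the user adds "Stories that stick" to the shopping cart
    When the user checks out
    Then the order is "pending" with message "Processing payment"
```

Whether this, or some other restructuring, is the best answer to the challenge is exactly the question posed above.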
Submissions have been closed and the solution for this challenge will be posted on Tuesday. Learn about our upcoming challenge on Wednesday.