Set up Preconditions: Background, Scenario Outline, or Hooks

2021-11-15
Gherkin


Instead of the answer to last week’s challenge, this week we’ll go a bit deeper into a recurring topic: setting up common preconditions.

Related scenarios often have a similar starting point. In the previous challenge, the shopping cart examples required a registered user. Repeating common set-up steps is rarely a good idea. It makes individual scenarios unnecessarily complex. Readers might start subconsciously skipping the opening lines of scenarios, expecting them to always be the same, and miss important differences. Updating common preconditions is also a challenge when the same steps are copied and pasted lots of times.

There are three good ways of moving common preconditions out of individual scenarios: scenario outlines, feature backgrounds and hooks. In this article, I’ll compare the three techniques and give you some guidelines on when to use which one.

Scenario outlines: shared structure

An outline allows us to group similar examples by writing the scenario only once and then listing the differences between examples in a table. For groups of examples that share a structure, this is a very good way of extracting common elements. Values that are the same in all the examples can remain in the scenario. In the following outline, all the examples require the same username, so we can specify it only once:

Scenario Outline: registered user ratings
 
Given user "mike99" exists in the system
And user "mike99" logs in
And the feedback scale is from 1 to 5
When the logged in user rates the app with <stars>
Then the feedback should be recorded as <rating>
And the feedback type should be "registered user"
 
Examples:
| stars | rating     |
| 1     | Poor       |
| 2     | Bad        |
| 3     | Meh        |
| 4     | OK         |
| 5     | Excellent  |

Outlines also allow us to specify common postconditions in a single place. In the previous case, the feedback type is the same for all the examples in the table, so it does not need to be repeated.

The big benefit of using outlines compared to the other two techniques is that they keep the preconditions close to the action and postconditions. Readers can easily see how the inputs affect the outputs.

In cases when examples share a common starting point, but they have different structures, scenario outlines are not useful. Grouping such examples into an outline often leads to blank table cells and overcomplicated scenarios.

If we wanted to introduce a specification for anonymous ratings, which would use the same feedback scale but not need a registered user, putting everything in a single outline would be a bad idea. Some scenario steps would apply only to certain examples, some table cells would be blank, and in general the outline would be difficult to read and maintain. It would be better to set up a different scenario outline with a different structure.
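As a sketch (assuming step wording similar to the registered user outline), the anonymous examples could get their own, simpler outline:

```gherkin
Scenario Outline: anonymous user ratings

Given the feedback scale is from 1 to 5
When a visitor rates the app with <stars>
Then the feedback should be recorded as <rating>
And the feedback type should be "anonymous"

Examples:
| stars | rating    |
| 1     | Poor      |
| 5     | Excellent |
```

Because each outline has its own structure, there are no blank table cells and no user set-up steps where they don’t belong.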

Feature backgrounds: shared context

For situations when we want to share setup across examples that have a different structure, it’s often better to use a Background section. All the Given/When/Then tools allow creating a common section for steps which the tool will automatically insert before each scenario in a feature file. This section is usually called Background, based on the keyword which starts it. The steps in the background section work as if they were copied and pasted into individual scenarios, but you don’t have to repeat them each time yourself.

The most common use for background sections is to set up common context for the entire feature file. For example, if we wanted to configure the rating scale once and then use it in multiple scenarios or outlines, we could do that with the following structure:

Feature: App ratings
 
Background:
Given the feedback scale is from 1 to 5
 
Scenario Outline: registered user ratings
 
Scenario Outline: anonymous user ratings

The rating scale is shared across both scenario outlines. Individual outlines are shorter and focused on their own purpose, but they are not constrained by a common structure.

Compared to keeping the common steps in scenario outlines, the background section is easier to modify but more difficult to read. People have to remember the contents of the background section when reading individual scenario outlines. For short feature files this is not an issue. Most people reading the anonymous feedback scenarios will remember that the rating scale is from 1 to 5. For long feature files, the gap between the background and the scenario may become a problem. Likewise, simple and short background sections, such as the previous one, are easy to remember. Long, complex backgrounds with lots of steps are difficult to understand and easy to forget.

When using background sections, avoid using them as a dumping ground for anything that later scenarios might need. Keep the common steps short and easy to remember.

Feature backgrounds are a simple solution for common preconditions, but they are relatively inflexible. A background applies to all scenarios in a file, and it only applies to the current feature file.

It’s not possible to selectively apply the background to some scenarios, and skip it for others. Complex feature files sometimes have groups of scenarios that require different set-ups. If we had another scenario outline requiring the "mike99" user, we could either choose to extract that step into the common background and apply it to everything, including anonymous visitor examples, or repeat it across multiple scenario outlines that actually need it. Both options have issues. The first would slow down testing and delay feedback for anonymous visitor examples unnecessarily. The second would complicate registered user examples more than necessary.

If different groups need different backgrounds, and it’s not possible to just use scenario outlines to group them, then perhaps split the specification into multiple feature files. Each file will be shorter, and it can use a more focused background section to prepare only the common aspects for its own scenarios.
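As a sketch of that split (file names and step wording are illustrative assumptions), each file gets a background focused on its own audience:

```gherkin
# registered-ratings.feature
Feature: Ratings from registered users

Background:
Given user "mike99" exists in the system
And user "mike99" logs in
And the feedback scale is from 1 to 5

# anonymous-ratings.feature
Feature: Ratings from anonymous visitors

Background:
Given the feedback scale is from 1 to 5
```

Each background now prepares only what its own scenarios need, so neither group pays for the other’s set-up.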

If we wanted to use the same rating scale across multiple files, we would have to copy the same background step into all the files. This is not ideal, as it makes updating the shared set-up more difficult. Sometimes people try to use one feature file to set up the context for the others. Please don’t do that. First, it makes the problem of remembering the contextual data even harder, since the background may not even be in the same file as the scenarios. Second, it introduces an implied dependency on the sequence of execution between feature tests, which is bad for all the same reasons as the ones explored in the challenge How to fix a chain of dependent scenarios?

Hooks: shared actions

For sharing context across multiple files, or applying it selectively to some scenarios, hooks are a better solution than background sections. Hooks are a feature of SpecFlow, and of most other Given/When/Then tools, that allows developers to specify actions to execute before or after a feature, a scenario, or even individual steps.

In the previous examples, the specific user wasn’t that important. We just needed a registered user to enter a rating. Using hooks, we could easily create a user before each scenario and remove it after the scenario runs, without having to specify anything in the steps. Here is how that would work in C# with SpecFlow:

[BeforeScenario]
public static void SetUpUserForScenario()
{
    // something to set up the user
}

[AfterScenario]
public static void DeleteUser()
{
    // something to remove the user
}
 

Because hooks are tool-specific, the usage syntax is different across programming languages. If you are not using SpecFlow, check the relevant documentation for your tool to find out how to achieve the same thing.

Hooks such as these are a great way to manage technical preconditions and clean-up actions. One trick I love to use with tests involving a database is to open up a transaction before each scenario, then roll it back after the scenario. This instantly makes tests isolated and repeatable. It also ensures that tests always start from the same point regardless of what happens in an individual scenario.
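As a minimal sketch of that transaction trick in SpecFlow — assuming a hypothetical `Database.BeginTransaction()` helper, since the details depend entirely on your data access layer:

```csharp
[Binding]
public class DatabaseHooks
{
    // Hypothetical helper; substitute your own data access layer.
    private static IDbTransaction _transaction;

    [BeforeScenario]
    public static void OpenTransaction()
    {
        _transaction = Database.BeginTransaction();
    }

    [AfterScenario]
    public static void RollBackTransaction()
    {
        // Undo anything the scenario changed, so the next
        // scenario starts from the same database state.
        _transaction.Rollback();
    }
}
```

Because the rollback runs after every scenario, even a failing or aborted test leaves the database untouched.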

Unlike background sections, hooks can be shared across feature files. They can also be selectively applied to just some scenarios or features. The most common way of limiting hooks is to combine them with tags. SpecFlow calls this combination scoped bindings. For example, the following piece of code would set up a hook that gets executed before each scenario, but only if it has the RequiresUser tag:

[BeforeScenario("RequiresUser")]
public static void SetUpUserForScenario()
{
    // something to set up the user
}

We can then easily flag the scenarios or scenario outlines that require a user, such as the one below:

@RequiresUser
Scenario Outline: registered user ratings
 
When the logged in user rates the app with <stars>
Then the feedback should be recorded as <rating>
And the feedback type should be "registered user"
 
Examples:

Combining tags and hooks lets us selectively apply common preconditions, so it is by far the most flexible way of sharing context. However, adding or updating hooks requires changing code, so developers usually have to maintain hooks. Compared to scenario outlines and background sections, which anyone can edit, this makes it harder to collaborate.

Similarly, the context set up using hooks and tags is hidden from the readers of a feature file. Developers can easily find the code related to the @RequiresUser tag, but business representatives usually don’t have the means to do that. When using hooks, make sure to give the related tags meaningful names, so readers can form a reasonable idea of what the underlying code does.

Which technique should you use?

When deciding where to put some common context, the key question should be “is this important for the purpose of the scenario, or just for the process of testing?”. This is the same question we explored in the challenge How to fix a chain of dependent scenarios? (see the section “Write set-up in a declarative way”).

Tags and hooks are best for sharing context that’s not really important for the purpose of the test, but may be important for how the test gets executed. For example, the actual username for the logged in user is not important for ratings; we just need to have a user and sign them in. Hooks fit nicely into this situation.

Background sections and scenario outlines are best for information that is directly related to the purpose of a scenario. For example, the rating scale is used by individual examples, and a different scale would lead to different outcomes. Hiding the scale in a hook would make things confusing and difficult to maintain.

Outlines keep the set-up information close to the usage, so they are easier to understand than background sections. But they require all the examples to share the same structure.

Based on that, I would use the following:

  • Split anonymous feedback examples and registered user examples into two scenario outlines. In each group the examples have the same structure, but it is different across groups, so two scenario outlines fit this situation nicely.
  • Put the feedback scale into the feature file background, since it’s important for the outcomes of individual examples and the purpose of the feature.
  • Move the user registration and log-in actions to hooks, and use a tag to mark the scenario outlines that require a user.
Feature: App ratings
 
Background:
 
Given the feedback scale is from 1 to 5
 
@RequiresUser
Scenario Outline: registered user ratings
 
When the logged in user rates the app with <stars>
Then the feedback should be recorded as <rating>
And the feedback type should be "registered user"
 
Examples:
| stars | rating     |
| 1     | Poor       |
| 2     | Bad        |
| 3     | Meh        |
| 4     | OK         |
| 5     | Excellent  |
 
Scenario Outline: anonymous user ratings
 
With anonymous users, we have less confidence in discrete answers, so we want to group the responses into fewer buckets compared to registered users.
 
When a visitor rates the app with <stars>
Then the feedback should be recorded as <rating>
And the feedback type should be "anonymous"
 
Examples:
| stars | rating     |
| 1     | Poor       |
| 2     | Poor       |
| 3     | Meh        |
| 4     | Excellent  |
| 5     | Excellent  |

Finally, here’s a quick summary for when to use which technique:

  • Use hooks to hide technical actions and coordination steps
  • Use scenario outlines to group examples that share the same structure, and put common data in the scenario instead of the example tables
  • Use background sections to expose contextual information for examples that do not share the same structure
  • If the background section becomes too complicated, consider splitting the feature into multiple files